Thursday, May 19, 2016

Data Model Generation for PostgreSQL

DinGo is a simple Go code generator that produces Microservice applications starting from a database schema.

PostgreSQL supported

As I showed in the previous post, DinGo produces code by reading the database schema, and its first version supported only MySQL. Now, after a not-so-short and unfortunately not continuous effort, the PostgreSQL database is finally supported.

The current release (2.0.beta) contains all the templates and producers needed to generate microservices that operate on Postgres. This DinGo release is still in beta because it needs more testing, and there are also some limitations: Postgres array types are not supported, and I don't know if they will be in the future.

Reading the Information Schema

PostgreSQL offers two methods for retrieving meta-data about a table's schema: using the system catalogs (pg_class, pg_user, pg_view, ...) or querying the information_schema (available since Postgres 7.4).
DinGo has adopted the second method, retrieving all the table meta-data by querying the information_schema. This approach is very similar to the one used with MySQL: the queries on the PostgreSQL information_schema require only a few changes with respect to the previous ones.
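
For example, the table list can be read from the Postgres information_schema with a query like the one below; note the "$1" placeholder instead of MySQL's "?". This is only a minimal sketch, and the function and package names are illustrative, not DinGo's actual code:

package schema

import "database/sql"

// readPostgresTables lists the tables of a given PostgreSQL schema.
func readPostgresTables(conn *sql.DB, schemaName string) ([]string, error) {
    q := "SELECT table_name FROM information_schema.tables " +
        "WHERE table_schema = $1 AND table_type = 'BASE TABLE'"
    rows, err := conn.Query(q, schemaName)
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var tables []string
    for rows.Next() {
        var name string
        if err := rows.Scan(&name); err != nil {
            return nil, err
        }
        tables = append(tables, name)
    }
    return tables, rows.Err()
}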

Updated configuration file

DinGo's configuration file requires some additional parameters to connect to Postgres. This is an example of configuration:

    "Hostname": "localhost", 
    "Port": "5432",
    "DatabaseType": "Postgres", 
    "DatabaseName": "Customers", 
    "Username": "postgres", 
    "Password": "mypassword",
    "BasePackage": "",
    "OutputPath": "$GOPATH/src/",
    "ExcludedEntities": [],
    "Entities": [],
    "SkipDaoGeneration": false,
    "SkipBizGeneration": false,
    "SkipServiceGeneration": false,
    "ForcePluralResourceName": true,
    "PostgresSchema": "public"

The new DatabaseType property can assume one of two values, "MySQL" or "Postgres", and the PostgresSchema property is used when querying the information_schema to get the meta-data of the selected set of tables.
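
Under the hood the two database types need different drivers and connection strings. The sketch below shows the common Go DSN formats for MySQL (go-sql-driver/mysql) and Postgres (lib/pq); it illustrates the idea, and is not necessarily the exact code DinGo generates:

package db

import (
    "database/sql"
    "fmt"
)

// Config mirrors the connection part of the JSON configuration above.
type Config struct {
    Hostname, Port, DatabaseType, DatabaseName, Username, Password string
}

// openDB builds the driver-specific DSN and opens the connection.
func openDB(cfg Config) (*sql.DB, error) {
    if cfg.DatabaseType == "Postgres" {
        dsn := fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=disable",
            cfg.Hostname, cfg.Port, cfg.Username, cfg.Password, cfg.DatabaseName)
        return sql.Open("postgres", dsn) // requires the lib/pq driver
    }
    dsn := fmt.Sprintf("%s:%s@tcp(%s:%s)/%s",
        cfg.Username, cfg.Password, cfg.Hostname, cfg.Port, cfg.DatabaseName)
    return sql.Open("mysql", dsn) // requires the go-sql-driver/mysql driver
}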

Saturday, March 19, 2016

From the database schema to RESTful API with DinGo

In the previous post I started writing about data model and code generation, and that post coincided with the start of an open source project called DinGo, conceived to generate the Go code of a Microservice host by analysing the schema of a relational database.

It's a very SQL-first approach to writing code, so I don't think it suits every project, but surely there are many applications that can take advantage of it, at least those that already use a well-defined relational database and want to adopt a Microservice architecture and a RESTful API.

What DinGo does

DinGo creates a Microservice application starting from a MySQL database schema. DinGo follows these steps during the generation process:
  1. DinGo reads the information_schema of the MySQL database
  2. it generates a set of Data Model structs that map the tables into Go structs
  3. it generates the Data Access Objects; these objects provide the basic (CRUD) operations needed to manipulate the entities
  4. it generates the View Model structs, which are similar to the Data Model structs but don't depend on the sql package
  5. it generates the Business Objects, which are wrappers around the DAO objects
  6. it generates the Service Objects used to expose the web methods
  7. it creates the host server Web App
  8. it writes the JSON configuration file needed by the Web App
  9. it writes the custom endpoints file used to extend the Web App endpoints

The result you get is a web API application that you just compile and run.

Architecture of the Web App

The application is composed of three layers:
  • the exposure layer, formed by the Service Objects, contains the web methods
  • the business layer provides the Business Objects and contains the business logic
  • the data layer provides the DAO components and is responsible for database access
The View Model is used by the REST API, which receives and transmits data in JSON format.
The Data Model is used to communicate with the DAO components and maps the tables and views of the database.
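
To make the layering concrete, here is a minimal sketch for a hypothetical Customer table (the names are illustrative, not DinGo's exact output): the DAO works on the Data Model, the Business Object wraps the DAO and converts to the View Model, and the Service Object would then serialize the View Model to JSON.

package example

import "database/sql"

// Data Model struct (data layer): maps the table and may use sql.Null* types.
type CustomerModel struct {
    ID   int64
    Name sql.NullString
}

// View Model struct: the same data, JSON-friendly and independent of the sql package.
type CustomerVM struct {
    ID   int64  `json:"id"`
    Name string `json:"name"`
}

// DAO (data layer): basic CRUD operations on the Data Model.
type CustomerDAO struct{ DB *sql.DB }

func (d *CustomerDAO) ReadByID(id int64) (*CustomerModel, error) {
    row := d.DB.QueryRow("SELECT id, name FROM Customer WHERE id = ?", id)
    m := &CustomerModel{}
    if err := row.Scan(&m.ID, &m.Name); err != nil {
        return nil, err
    }
    return m, nil
}

// Business Object (business layer): wraps the DAO and returns the View Model.
type CustomerBiz struct{ Dao *CustomerDAO }

func (b *CustomerBiz) Get(id int64) (*CustomerVM, error) {
    m, err := b.Dao.ReadByID(id)
    if err != nil {
        return nil, err
    }
    return &CustomerVM{ID: m.ID, Name: m.Name.String}, nil
}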

How DinGo works

After reading the information about the database schema, DinGo builds the structures that contain all the information necessary to produce the Go packages that will be generated.
The information associated with a package concerns the list of structs it contains, the methods to be included and the imports necessary for compilation.
This information is then passed to the file generators that process the Go code templates. All templates are included in the DinGo project and they allow the creation of the Web App.

If necessary, the templates can be modified according to the developer's requirements: just fork the DinGo project on GitHub.

What DinGo produces

The generated Web App has a structure similar to the example below.
The project's structure reflects the architecture of the application design:
the main application depends on the service package, which depends on the biz package, which depends on the dao package.
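
Roughly, the generated project looks like this (a sketch: the package names follow the layers described above, while the exact files depend on your schema):

mywebapp/
    main.go          the Web App host
    config.json      connection and runtime parameters
    service/         Service Objects (REST endpoints)
    biz/             Business Objects
    dao/             Data Access Objects
    model/           Data Model and View Model structs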

There is also a config.json file that contains all the connection parameters, so that the running application can query the database that originated all the code.

It is not mandatory to generate all the application layers: you can control which layers DinGo has to produce by setting the configuration properties SkipDaoGeneration, SkipBizGeneration and SkipServiceGeneration.
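
For example, to generate only the DAO and business layers you could set:

    "SkipDaoGeneration": false,
    "SkipBizGeneration": false,
    "SkipServiceGeneration": true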

How to customize generated code

All the objects generated by DinGo can be extended by adding custom methods. It's recommended to place the new methods in new files, so there is no risk of overwriting them by running DinGo again.

There's a special file named customresurces.go that can be used to add new endpoints to the REST API. This file is created only once and never overwritten, and it contains a method that is called at startup.

All the API endpoints are exposed using Gin-Gonic, but if you want you can use another web framework: you can produce only the DAO and Biz layers and then use any other framework for the Web App, but in that case you have to develop the exposure layer yourself.
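
As an illustration, adding a custom endpoint on the Gin engine could look like the snippet below; the function name and the way the engine is passed in are assumptions, not DinGo's exact generated signature:

package service

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

// RegisterCustomEndpoints is a hypothetical startup hook: DinGo calls a
// similar method when the Web App starts, passing the Gin engine so that
// hand-written endpoints live next to the generated ones.
func RegisterCustomEndpoints(r *gin.Engine) {
    r.GET("/api/ping", func(c *gin.Context) {
        c.JSON(http.StatusOK, gin.H{"message": "pong"})
    })
}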

DAO and Biz components can also be used in batch applications that don't need to expose web methods.


DinGo quickly produces code that we would otherwise have to write by hand, and that is often the most repetitive part of creating REST API services.
But everything DinGo does is just the starting point of our Web App: it's then up to us to complete the application and comply with the requirements.

Monday, February 29, 2016

Data Model Code Generation

Don't repeat yourself

Often, while developing applications, we find ourselves writing the same code more than once, and often this code concerns data access. If the data is stored in a relational database we need to implement the functions that perform the create, read, update and delete (CRUD) operations, plus some other utility functions to retrieve an entity by primary key, count the entities in a table, and so on.
We can write the same code for each entity, or we can write a code generator that implements it for us. This is why code generators can be useful: they produce the repetitive code that we would otherwise implement by hand, eliminating the tedious part of writing code.

Querying MySQL Information_Schema

In order to generate the code that accesses the entities we need to know the structure of those entities.
If the data is stored in a MySQL database we can query the "Information Schema" to learn the database structure.
The information_schema contains a set of views that report information on the types of entities defined in the database. The complete description of this schema can be found in the official MySQL documentation.

Get the list of Tables and Views

The view information_schema.TABLES contains all the tables and views defined in all the databases of the MySQL server.
This Go code snippet extracts the list of MySQL tables.

func readTables(conn *sql.DB, schema *model.DatabaseSchema) {
    q := "SELECT TABLE_NAME FROM information_schema.TABLES "
    q += "WHERE TABLE_SCHEMA=?"
    rows, err := conn.Query(q, schema.SchemaName)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        table := &model.Table{}
        err := rows.Scan(&table.TableName)
        if err != nil {
            log.Fatal(err)
        }
        schema.Tables = append(schema.Tables, table)
        log.Printf("Examining table %s\r\n", table.TableName)
        readColums(conn, schema, table.TableName, &table.Columns)
        for _, col := range table.Columns {
            if col.IsPrimaryKey {
                table.PrimaryKeys = append(table.PrimaryKeys, col)
            } else {
                table.OtherColumns = append(table.OtherColumns, col)
            }
        }
    }
}

Get the list of Columns

The list of columns of a table can be obtained by querying the view information_schema.COLUMNS.

func readColums(conn *sql.DB, schema *model.DatabaseSchema,
    tableName string, colums *[]*model.Column) {
    q := "SELECT COLUMN_NAME, IS_NULLABLE, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, "
    q += "NUMERIC_PRECISION, NUMERIC_SCALE, COLUMN_TYPE, COLUMN_KEY, EXTRA"
    q += " FROM information_schema.COLUMNS "
    q += "WHERE TABLE_SCHEMA=? AND TABLE_NAME=? ORDER BY ORDINAL_POSITION"
    rows, err := conn.Query(q, schema.SchemaName, tableName)
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()
    for rows.Next() {
        column := &model.Column{}
        nullable := "NO"
        columnKey, extra := "", ""
        err := rows.Scan(&column.ColumnName, &nullable, &column.DataType,
            &column.CharacterMaximumLength, &column.NumericPrecision,
            &column.NumericScale, &column.ColumnType, &columnKey, &extra)
        if err != nil {
            log.Fatal(err)
        }
        //log.Printf("Examining column %s\r\n", column.ColumnName)
        if "NO" == nullable {
            column.IsNullable = false
        } else {
            column.IsNullable = true
        }
        if "PRI" == columnKey {
            column.IsPrimaryKey = true
        }
        if "UNI" == columnKey {
            column.IsUnique = true
        }
        if "auto_increment" == extra {
            column.IsAutoIncrement = true
        }
        *colums = append(*colums, column)
    }
}

Create the model structs using a template

Before generating the code that performs the CRUD operations on the entities, we want to produce the code of the structures that will hold the data.
First we define the objects that describe the package and its structs, and we map the database schema onto these structs.

func ProduceModelPackage(config *model.Configuration,
    schema *model.DatabaseSchema) (pkg *model.ModelPackage) {
    pkg = &model.ModelPackage{PackageName: "model",
        BasePackage: config.BasePackage}
    for _, table := range schema.Tables {
        mt := &model.ModelType{TypeName: getModelTypeName(table.TableName),
            PackageName: "model"}
        pkg.ModelTypes = append(pkg.ModelTypes, mt)
        for _, column := range table.Columns {
            field := &model.ModelField{FieldName: getModelFieldName(column.ColumnName),
                FieldType:     getModelFieldType(pkg, column),
                FieldMetadata: getFieldMetadata(pkg, column)}
            if column.IsPrimaryKey {
                field.IsPK = true
                mt.PKFields = append(mt.PKFields, field)
            } else {
                mt.OtherFields = append(mt.OtherFields, field)
            }
            if column.IsAutoIncrement {
                field.IsAutoInc = true
            }
            mt.Fields = append(mt.Fields, field)
        }
    }
    for _, view := range schema.Views {
        mt := &model.ModelType{TypeName: getModelTypeName(view.ViewName),
            PackageName: "model"}
        pkg.ViewModelTypes = append(pkg.ViewModelTypes, mt)
        for _, column := range view.Columns {
            field := &model.ModelField{FieldName: getModelFieldName(column.ColumnName),
                FieldType:     getModelFieldType(pkg, column),
                FieldMetadata: getFieldMetadata(pkg, column)}
            mt.Fields = append(mt.Fields, field)
        }
    }
    return pkg
}

Now we use a template to create the structures that will contain the data of the tables:

package {{.PackageName}}

{{if .HasImports}}{{range .ImportPackages}}import "{{.}}"
{{end}}{{end}}
{{range .ModelTypes}}
// Data transfer object for {{.TypeName}}
type {{.TypeName}} struct {
    {{range .Fields}}{{.FieldName}} {{.FieldType}} `{{.FieldMetadata}}`
    {{end}}}
{{end}}
{{range .ViewModelTypes}}
// Data transfer object for view {{.TypeName}}
type {{.TypeName}} struct {
    {{range .Fields}}{{.FieldName}} {{.FieldType}} `{{.FieldMetadata}}`
    {{end}}}
{{end}}
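
The producers feed structures like the ModelPackage above to Go's text/template package. Here is a minimal sketch of how such a template could be rendered and written to disk; the function and variable names are illustrative, not DinGo's actual producer API:

package producer

import (
    "bytes"
    "go/format"
    "io/ioutil"
    "log"
    "text/template"
)

// renderModel executes the model template against the ModelPackage produced
// above and writes a gofmt-formatted Go source file.
func renderModel(modelTemplate string, pkg interface{}, outputFile string) {
    t := template.Must(template.New("model").Parse(modelTemplate))
    var buf bytes.Buffer
    if err := t.Execute(&buf, pkg); err != nil {
        log.Fatal(err)
    }
    // format the generated source before writing it
    src, err := format.Source(buf.Bytes())
    if err != nil {
        log.Fatal(err)
    }
    if err := ioutil.WriteFile(outputFile, src, 0644); err != nil {
        log.Fatal(err)
    }
}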



Next step

Now that we have the structures that will contain the data, we can proceed to create the objects that access the data (DAO), but this step will be covered in the next blog post.

All the code snippets in this article are taken from the DinGo project, an open source code generator for Go available on GitHub.

Friday, February 5, 2016

OVO Key/Value Storage

Yet another Key/Value Storage?

I started developing OVO, my own implementation of a Key/Value Storage, six months ago because I wanted to try developing a project using the Go language. In the past I had already worked on a similar project, GetCache, but it was developed with the .NET Framework.

I don't think the world lacks Key/Value Storages, but for me this has been a great gym to train my Go. I have realized that building this project with the Go language is extraordinarily simple, so I would like to share this experience.

What is OVO

Initially, the project aimed to create a distributed cache, like Memcached, that would allow storing objects and JSON structures quickly by spreading the load across multiple servers, and that could run on any kind of Linux, Windows or iOS machine. Each cached object can have a time-to-live (TTL) property set so that it is removed automatically when it expires.

Then I realized that I needed a highly reliable service, so I introduced a data replication mechanism: I decided to use asynchronous replication of the data. Cluster nodes communicate using RPC, and data replication commands are queued on Go channels, which lets me manage the communication between processes easily. I did not want to introduce differences between nodes: there are no Master and Slave nodes, they are all peers.

I finally added some typical features of Key/Value Storage systems, expanding the possibilities of OVO:
  • Update a value if it is not changed
  • Delete a value if it is not changed
  • Get and remove
  • Manage atomic counters
At the beginning, the project made wide use of goroutines, channels and function closures to manage concurrent access to the storage and to perform atomic transactions. All the operations are queued on a commands channel so that the state of the storage remains consistent.
For example, the operation "Delete a value if it is not changed" was initially implemented in this way (in the snippets below the internal map field is shown as; the actual field name in OVO may differ):
// Delete an item if the value is not changed.
func (coll *InMemoryCollection) DeleteValueIfEqual(obj *storage.MetaDataObj) bool {
    retChan := make(chan bool)
    defer close(retChan)
    coll.commands <- func() {
        if ret, ok :=[obj.Key]; ok { // is the internal map (field name assumed)
            if bytes.Equal(ret.Data, obj.Data) {
                delete(, obj.Key)
                retChan <- true
            } else {
                retChan <- false // values are not equal
            }
        } else {
            retChan <- true // already deleted
        }
    }
    return <-retChan // wait for the result
}

After a discussion on the Golang Reddit about the use of a Mutex (thanks to those guys), I made another version of the storage that makes use of sync.RWMutex.
I wanted to see what the performance difference is between an implementation that adopts only goroutines and channels and one that uses the Mutex.

// Delete an item if the value is not changed.
func (coll *InMemoryMutexCollection) DeleteValueIfEqual(obj *storage.MetaDataObj) bool {
    coll.Lock()
    defer coll.Unlock()
    if ret, ok :=[obj.Key]; ok { // is the internal map (field name assumed)
        if bytes.Equal(ret.Data, obj.Data) {
            delete(, obj.Key)
            return true
        }
        return false // values are not equal
    }
    return true // already deleted
}

The use of a Mutex turned out to be, on average, two and a half times faster than the use of goroutines and channels. This large difference led me to choose the Mutex-based implementation.
Even if it is slower, an interesting property of the goroutines-and-channels approach is the linear response of the system: performing tests with 10000, 100000 and 1000000 goroutines we get a linear growth in response times, and this honor goes to the creators of Go.
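
The comparison can be reproduced with a pair of Go benchmarks along these lines; this is a simplified, self-contained sketch of the two approaches, not OVO's actual storage code:

package kvbench

import (
    "sync"
    "testing"
)

// chanStore queues every operation as a closure on a commands channel,
// executed by a single goroutine (the channel-based approach).
type chanStore struct {
    data     map[string]string
    commands chan func()
}

func newChanStore() *chanStore {
    s := &chanStore{data: map[string]string{}, commands: make(chan func(), 1024)}
    go func() {
        for cmd := range s.commands {
            cmd()
        }
    }()
    return s
}

func (s *chanStore) Delete(key string) {
    done := make(chan struct{})
    s.commands <- func() {
        delete(, key)
        close(done)
    }
    <-done // wait for the command to be executed
}

// mutexStore protects the map with a sync.RWMutex (the Mutex-based approach).
type mutexStore struct {
    sync.RWMutex
    data map[string]string
}

func (s *mutexStore) Delete(key string) {
    s.Lock()
    defer s.Unlock()
    delete(, key)
}

func BenchmarkChannelDelete(b *testing.B) {
    s := newChanStore()
    for i := 0; i < b.N; i++ {
        s.Delete("key")
    }
}

func BenchmarkMutexDelete(b *testing.B) {
    s := &mutexStore{data: map[string]string{}}
    for i := 0; i < b.N; i++ {
        s.Delete("key")
    }
}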

Client libraries

The OVO server is accessible through a RESTful API, so anyone can develop client libraries for any language and platform.
So far I have developed clients in Go and C#, while the Java client is in development.
The client libraries must implement the sharding mechanism by which the data is distributed over the nodes of an OVO cluster. The distribution of the data over the nodes is based on a deterministic calculation of the hashcode of the key.
The cluster of OVO nodes partitions the hashcode range using a simple distribution policy, sketched below.
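
A minimal sketch of this idea follows; the hash function and the partitioning policy shown here are illustrative, since every real client library must implement exactly the same scheme so that all clients pick the same node for a given key:

package client

import "hash/fnv"

// hashCode computes a deterministic hashcode for a key.
func hashCode(key string) uint32 {
    h := fnv.New32a()
    h.Write([]byte(key))
    return h.Sum32()
}

// pickNode partitions the 32-bit hashcode range into equal slices,
// one per cluster node, and returns the node that owns the key.
func pickNode(key string, nodes []string) string {
    if len(nodes) == 0 {
        return ""
    }
    sliceSize := (uint64(1) << 32) / uint64(len(nodes))
    idx := uint64(hashCode(key)) / sliceSize
    if idx >= uint64(len(nodes)) { // guard against rounding at the upper edge
        idx = uint64(len(nodes)) - 1
    }
    return nodes[idx]
}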

What I appreciated in all this

Many aspects of the Go programming language make the realization of distributed applications easy and pleasant:
  • easy management of concurrent accesses
  • quick implementation of RESTful API middleware
  • RPC communications
  • making simple what appears complex
The result is that the OVO server has very good performance, especially in writing. It is easy to install and takes up few resources: it compiles into an executable of only 10 MB.


OVO server and clients are all open source software, released under the MIT license.

OVO Key/Value Storage repository

OVO Go client library repository

OVO .Net client library repository