About HTTP and the importance of Close()

Go, the so-called systems language from Google (I have checked the performance against C++; Go loses), has some very comfortable features for dealing with HTTP. When a strongly typed language is required, which these days is more often than not, I find myself drawn to it. It is simple and compiles very fast (one of the points where it beats C++ with ease). Go has both a simple HTTP client and a simple HTTP server, and they offer the same comfort level as Node.js, with far better performance, data sharing, and actual multithreading (Go uses goroutines, but that goes beyond the scope of this post).

To open a simple GET connection to a server, we can just do this:

resp, err := http.Get(address)

where resp is the response object. We can get the status code (resp.StatusCode), as well as the body of the response, via

resp.Body
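
For example, a minimal sketch that prints the status code and the body could look like this (assuming the usual fmt, log, net/http and io/ioutil imports, and that address holds a valid URL):

resp, err := http.Get(address)
if err != nil {
	log.Fatal(err)
}
fmt.Println(resp.StatusCode)
data, _ := ioutil.ReadAll(resp.Body) // reads the whole stream into memory
fmt.Println(string(data))
// note that resp.Body is never closed here; more on that below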

Server side

Server side is not much harder.

First we need a function (or more than one) to serve content to the client:

func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
	fmt.Fprintf(w, "Hello world!")
}

Then we just start the server with that function:

listeningPort := 3000
http.HandleFunc("/", handler)
http.ListenAndServe(fmt.Sprintf(":%d", listeningPort), nil)

In both cases, client and server, there is a serious bug: the body is not being closed.

Now, this code will work; the only question is for how long. Sooner or later, a socket (file descriptor) limit error is going to show its face, and the program will crash. This error happens quite a bit. The usual suggestion is to increase the open-file limit (on Linux), which will solve it for a little while, until the new limit is reached.

So what does the body have to do with that? Well, the body is a stream object, an io.ReadCloser to be exact, and it will not pull all the content for you and store it in a buffer; there might be a lot of it, and you might not want that. Therefore, once a body has been received, it must be closed.

defer to the rescue

Luckily enough, Go has the defer keyword, which will execute a statement for you upon function exit, thus relieving you from figuring out exactly where to close the Body.

Here is the corrected version of the handler:

func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/plain; charset=utf-8")
	fmt.Fprintf(w, "Hello world!")
	r.Body.Close()
}

Note that here, defer was not needed (strictly speaking, the net/http server closes the request body for you once the handler returns, so this call is mainly about being explicit). You should also call Body.Close on the client side:

resp, err := http.Get(address)
if err != nil {
	// handle the error
} else {
	defer resp.Body.Close()
}
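
Putting it all together, here is a sketch of a small helper that fetches a URL and always closes the body, no matter which return path is taken. The fetch function, its signature, and the status check are mine, just for illustration, and the usual fmt, net/http and io/ioutil imports are assumed:

func fetch(address string) ([]byte, error) {
	resp, err := http.Get(address)
	if err != nil {
		return nil, err
	}
	// deferred, so it runs on every return path below
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return ioutil.ReadAll(resp.Body)
}

Because Close is deferred right after the error check, even the early return on a bad status code will not leak the connection.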


Migrating from MongoDB to DynamoDB

Mongo is great. I'm using it in more than one project, and I love it.

Is there a real reason to switch to DynamoDB? Well, there are a few:

  1. Mongo is a memory hog. This means you have to maintain pretty big instances in order to keep it fast and happy.

  2. Servers cost money. Not only the hourly fee, but maintenance as well. A lean startup probably can't afford these, and bigger companies might want the easy scalability that comes with Dynamo. Scaling Mongo is not hard, but it's yet another thing to do.

  3. Serverless. Using Node.js and the Serverless framework allows you to run a whole infrastructure without… well… an infrastructure. This is huge, as you don't have to maintain anything.

So, how do we migrate?

  1. Indexes. While Mongo enforces a record ID (_id), Dynamo enforces at least one key (the partition key), with the option of a second one (the sort key). Migration here is pretty easy: create the main key as a string, and put a random string in it when inserting a new record:

    function createUsersTable(callback) {
        let params = {
            TableName: TABLE_NAME,
            KeySchema: [{
                AttributeName: "user_id",
                KeyType: "HASH"
            }],
            AttributeDefinitions: [{
                AttributeName: "user_id",
                AttributeType: "S"
            }],
            ProvisionedThroughput: {
                "ReadCapacityUnits": 5,
                "WriteCapacityUnits": 5
            }
        }
        dynamodb.createTable(params, (err, data) => {
            callback(err, data)
        });
    }


  2. Dynamo is able to offer a schema of some sort: the key attributes you declare (as in the AttributeDefinitions above) are enforced in both presence and type, while the rest of each record stays schemaless. You don't have to go beyond the keys, but where you can, this guarantees a certain consistency within the records.

  3. Dynamo requires explicit types when storing data. But if you're using NodeJS, you're in luck: AWS.DynamoDB.DocumentClient will infer these for you, resulting in usage extremely similar to what you know and love from Mongo:

    let dynamo = new AWS.DynamoDB.DocumentClient();

    and inserting a record is just as easy:

    function addUser(callback) {
        let params = {
            TableName: TABLE_NAME
        };
        let item = {
            user_id: rand.generate(),
            username: "Bick "+rand.generate(7),
            password: rand.generate(10),
            address: {
                home: "123 wrefwre,fwref",
                work: "wre 5whbwergwregwerg"
            }
        }
        params.Item = item;
        dynamo.put(params, callback);
    }

    Where rand is a module I've used to generate random strings (see full source link at the bottom).

  4. Running locally: Using Mongo locally is easy; it's open source, and you can just install it. DynamoDB is proprietary software, and you can't get a copy of it. Amazon solved this by creating a local Java version of the API, backed by SQLite. You can now run a local stand-in for DynamoDB if you need to test your code. The only setup you need is the AWS config:

    const credentials = {
        accessKeyId: "fakeAccessKey",
        secretAccessKey: "fakeSecretAccessKey",
        region: "fakeRegion",
        endpoint: "http://localhost:15000"
    };


Installation instructions for the local version of DynamoDB: https://github.com/talreg/dynamodb-node/blob/master/README.md

The full sample project can be found here: https://github.com/talreg/dynamodb-node


Installing LiteIDE for Go

The new Google Go language has a nice IDE to program with: LiteIDE.

The download link is here: http://sourceforge.net/projects/liteide/

Once untarred, you might run into an issue where the program seems to start and then immediately dies (on Linux). That's Qt's doing: remove any library (in the lib folder) that has qt in its name, and you should be done.


Ruby on Windows with MongoDB

Setting up Ruby on Windows, like anything Windows these days, is more annoying than on other environments, especially Linux. Here are some key points I hope will save you some time:

  1. The Ruby installer for Windows is only the start; you can find it on the RubyInstaller site. While you're there, don't stop at the installer: make sure to also download the Development Kit (Development Kit section). You'll need it later on. Make sure to extract it to a simple path, e.g. c:\devkit or similar; don't use spaces or special characters.
  2. Once Ruby is installed, let's check that gem works: if you can run gem update --system without an error, great. If not, here is what you need to do: download the updated RubyGems .pem certificate file and save it in your rubygems/ssl_certs/ folder. Now the command should execute correctly.
  3. Let's update the installed gems:

    gem update

  4. To install Mongo, let's run

    gem install mongo
    gem install bson_ext

  5. The last one installs the bson extension written in C, which is much faster. Great? Sure, but it's not going to work out of the box on Windows. So now what? First, let's go to the install folder of this gem (..lib/ruby/gems/[version]/gems/bson_ext[xxx]/) using cmd.
  6. Once there, open the cbson.c file located inside the ext/cbson folder. Make sure that it includes winsock2 and not arpa/inet on Windows. Note that this fix already exists in more recent versions, so if it's there, you don't need to change it. This is how it should look:


    #ifdef _WIN32
    #include <winsock2.h>
    #else
    #include <arpa/inet.h>
    #include <sys/types.h>
    #endif

    Note that if your copy already contains this block, your installation might actually work, so you can skip directly to the test code below.

  7. Next, you need to set up your DevKit installation, so go to your DevKit folder and run

    ruby dk.rb init

    This will generate the config.yml file in that folder.

  8. Edit this file, making sure that it contains the Ruby path at its end, like this: - c:/ruby. Note the space after the dash and the forward slash; these are not typos.
  9. Next, run

    ruby dk.rb install

  10. In your command window, still in the gem folder, run gem build bson_ext.gemspec.
  11. Move the newly built .gem file to another folder; you'll install it from there in a moment.
  12. Delete the entire bson_ext gem folder.
  13. Run

    gem install bson_ext-1.11.1.gem --local

    from within the folder where you saved the gem.

Starter code:

require 'rubygems'
require('mongo')
puts('testing mongo...')

If you can run this code without an error, and without a Mongo warning claiming that you are not using bson_ext, you are good to go!


Static constructor in NodeJS objects

It's not hard to create an object in JavaScript, and therefore in NodeJS. However, if you're an advanced user, you have probably dabbled a little with static members (factories being the definitive example). Since JavaScript doesn't really come with static constructors, we need to take advantage of NodeJS's require feature.

About require

require is NodeJS-specific. The cool thing about it is that a module is only loaded once, so any code at the module's top level is executed only once, which makes it a perfect place for static initialization. Let's take a look at some code:

function Counter() {
  this.counter = 0;
  Counter.__counters++; // __counters is undefined at this point
}

What we have here is a simple count of the number of objects created. However, since Counter.__counters is not defined, the increment just produces NaN instead of a running count. Sure, we could check inside the constructor that the variable exists and initialize it if needed, and here that is not a big issue, but if such a check were slow or costly, we would have a problem. Using the NodeJS require behavior, we can solve it easily:


Counter.__counters = 0; // executed only once, when the module is first require()d

function Counter() {
  this.counter = 0;
  Counter.__counters++;
}

The first line will be executed only once, which effectively makes it a static constructor. That line can be replaced with a call to an initialization function, if we wish to make it neater, and the effect will remain the same.