Archive

Archive for the ‘zero-VM’ Category

How ZeroCloud resolves a ZeroVM app and input file located on different nodes

Suppose I have a Swift cluster of 3 nodes (node1, node2, node3), and a ZeroVM app (zapp) that uses data.txt as an input file.

Now, if the zapp is stored on node1 and the input file on node2, how is a request to execute the zapp resolved, i.e. to which object server does the request get forwarded?

In this situation the proxy server discovers that the zapp and the input file are on different nodes. It looks up the actual location (host address and absolute path on the filesystem) of each file, and then one of the following may occur:

1. The proxy server forwards the request to the node where the input file is – node2 in this case. Node2 fetches the zapp from node1, and ZeroVM is launched on node2.

2. The proxy server forwards the request to the node where the zapp is – node1 in this case. Node1 fetches the input file (data.txt) from node2, and ZeroVM is launched on node1.

Option 1 makes more sense, since the input file can be much larger than the zapp. And yes, that is what will happen.

By default it decides using the following logic: the first channel that carries read-only data and is not the script/executable/image wins. The behaviour is more complex when there are writable channels and other constructs involved.

On the other hand, you can always co-locate everything with a specific channel by using the "attach" property of the node, i.e. "attach": "stdin" will attach everything to the node where the "stdin" channel is located.
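
For example, a node in the job description could look like this (a minimal sketch; the swift:// paths and names are made up for illustration, and the layout follows the zerocloud job description format):

[
    {
        "name": "zapp",
        "exec": {"path": "swift://AUTH_admin/mycontainer/zapp.nexe"},
        "file_list": [
            {"device": "stdin", "path": "swift://AUTH_admin/mycontainer/data.txt"},
            {"device": "stdout"}
        ],
        "attach": "stdin"
    }
]

With "attach": "stdin" the whole job should be co-located on the node holding data.txt, regardless of where zapp.nexe lives.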

 

Reference:

1. https://groups.google.com/forum/#!topic/zerovm/tpbiNDeHw_I

Categories: openstack, zero-VM

g++: internal compiler error: Killed (program cc1plus) while building zerovm inside a vagrant box

While installing the ZeroVM toolchain (the "build toolchain" section of https://github.com/zerovm/toolchain/blob/master/README.md) inside a Vagrant box, I came across the following error:

 

       g++: internal compiler error: Killed (program cc1plus)

 

It turns out [1] that this was because the default Vagrant box comes with 360 MB of RAM, and 360 MB is not enough to build the toolchain. So, you need to increase the Vagrant box RAM to 1024 MB.

 

This is how you can change the Vagrant box RAM size in the Vagrantfile [2]:

config.vm.provider :virtualbox do |vb|
  # raise the VM memory from the default to 1024 MB
  vb.customize ["modifyvm", :id, "--memory", "1024"]
end
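
After changing the Vagrantfile, reload the box so the new memory setting takes effect:

vagrant reload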



Did that do the trick for you?

References:
[1] https://bitcointalk.org/index.php?topic=304389.0
[2] http://stackoverflow.com/questions/12308149/vagrantfile-how-do-i-increase-ram-and-set-up-host-only-networking
Categories: zero-VM

Running ZeroVM with ZeroCloud Middleware.

April 13, 2014

Running ZeroVM App with ZeroCloud Middleware:

I have used both the swift client and curl commands, and the tempauth module for authentication (instead of Keystone / swauth / related things). In my Swift admin account (account name: AUTH_admin) I have uploaded an input file (named input.data, which is not actually used by my app) and an output file (output.data) where the output will be written.

My executable is sizeof.nexe, which prints the sizes of different ZeroVM-supported data types. The corresponding C program is available here [1].

My job description file is named sizeof.json and has the following content:

[
    {
        "name": "sizeof",
        "exec": {"path": "swift://AUTH_admin/zerovm/sizeof.nexe"},
        "file_list": [
            {"device": "stdin", "path": "swift://AUTH_admin/zerovm/input.data"},
            {"device": "stdout", "path": "swift://AUTH_admin/zerovm/output.data"},
            {"device": "stderr"}
        ],
        "args": "1048576"
    }
]

Pay attention to how the paths for the executable and the other files look. If you do not specify the paths with the right syntax, it won't work.

Here are the commands I used to upload the input, output and executable files and then run the executable with the job description file.

# Creating the container:

The 'zerovm' container is the place where all my files are uploaded:

swift  -A http://localhost/auth/v1.0/ -U admin:admin -K admin post zerovm

# Upload the input, output and executable files:

Both my input and output files are empty.
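
They are just empty placeholders; you can create them with, for example:

touch input.data output.data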

swift  -A http://localhost/auth/v1.0/ -U admin:admin -K admin upload zerovm output.data

swift  -A http://localhost/auth/v1.0/ -U admin:admin -K admin upload zerovm input.data

swift  -A http://localhost/auth/v1.0/ -U admin:admin -K admin upload zerovm  sizeof.nexe

# Give read/write permission to the container:

I am using curl for the next commands, so I need to generate a token first. I use tempauth to get the token and storage URL:

curl -v -H  X-Auth-User:admin:admin -H  X-Auth-Key:admin  http://localhost/auth/v1.0/

# Copy and paste the STORAGE_URL & TOKEN from the output of the last command

export STORAGE_URL=http://localhost/v1/AUTH_admin

export TOKEN=AUTH_tkc20cf42128854625bf2a46a6dae47af6

The token won't be the same in your case, so you have to get yourself a valid token of your own.

Here I am giving world permission to the container, which is over-permissive, but for the time being it is okay.

curl -i -XPOST -H "X-Auth-Token: $TOKEN" -H "X-Container-Read: *.*" \
    http://localhost/v1/AUTH_admin/zerovm

curl -i -XPOST -H "X-Auth-Token: $TOKEN" -H "X-Container-Write: *.*" \
    http://localhost/v1/AUTH_admin/zerovm

# Uploading & Executing Job files:

curl -X POST -i -H "X-Auth-Token: $TOKEN" -H "X-Zerovm-Execute: 1.0" -H "Content-Type: application/json" -T sizeof.json $STORAGE_URL/zerovm/sizeof.json
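
As far as I can tell from the zerocloud doc [2], you can also POST the job description directly to the account URL instead of uploading it to a container first; something along these lines:

curl -X POST -i -H "X-Auth-Token: $TOKEN" -H "X-Zerovm-Execute: 1.0" \
    -H "Content-Type: application/json" --data-binary @sizeof.json $STORAGE_URL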

Seeing the result:

In my case, the output is written to the file output.data, and I see the following response when executing the job file:

HTTP/1.1 200 OK

Date: Sun, 13 Apr 2014 18:17:40 GMT

X-Nexe-Retcode: 0

X-Nexe-System: sizeof

X-Nexe-Cdr-Line: 0.052, 0.024, 0.00 0.00 1 921 36 1047 0 0 0 0

X-Nexe-Validation: 0

X-Nexe-Etag: /dev/stderr d41d8cd98f00b204e9800998ecf8427e

Etag: fa0833ba7d5be12b11dd565d8074de30

X-Nexe-Status: ok

Content-Type: text/html; charset=UTF-8

Content-Length: 0

Via: 1.1 127.0.1.1

Vary: Accept-Encoding

Boom!! Now my (and hopefully your) ZeroVM app is running through the ZeroCloud middleware.
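
To actually look at the program's output, you can download output.data from the container, for example:

swift -A http://localhost/auth/v1.0/ -U admin:admin -K admin download zerovm output.data

cat output.data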

References:

[1]. https://github.com/zerovm/zerovm-2.0/blob/master/tests/functional/demo/sizeof/sizeof.c

[2]. https://github.com/zerovm/zerocloud/tree/icehouse/doc

Categories: openstack, zero-VM

ZeroVM data processing model

The data processing model of ZeroVM is quite interesting, which intrigued me to write this note.
The conventional paradigm of data processing in the cloud is the isolation of data and application. For example, in a conventional cloud we have a compute node or VM running somewhere in the cloud (for example on Amazon's cloud), while the data may reside somewhere else (for example in the OpenStack object storage, Swift). In order to process it, the data has to be fetched from the data storage down to the compute node. In simple words: in the conventional cloud, data moves to the application before processing.

ZeroVM works the other way around. Instead of the data moving to the VM, the application or query moves to the data storage, and on the storage side a lightweight VM is launched next to the data to run the moved app (or query).

(Figure: ZeroVM vs. traditional data processing model)

It is worth mentioning here that, in order to leverage this new data processing model, the OpenStack object storage "Swift" is on its way to designing and bundling ZeroVM facilities, probably in its next release, which really underlines the applicability of this model.

Categories: openstack, zero-VM

Introduction to ZeroVM

There is a whole lot going on with ZeroVM – an emerging virtualization technique being adopted by OpenStack that promises to bring many benefits to the existing cloud storage (Swift) infrastructure.


What is ZeroVM:

ZeroVM is a new virtualization technique that is supposed to be very lightweight, fast and short-lived. For example, while traditional hypervisors (e.g. KVM, Xen) take 1 or 2 seconds to load a VM, a ZeroVM instance is supposed to load in a couple of milliseconds. And while an existing VM runs and serves many requests, a ZeroVM instance serves only one request, then it dies and is never used again. ZeroVM takes advantage of this philosophy: when a ZeroVM instance is loaded, it loads the minimum files and creates the minimum environment needed to serve that request. For example, if your ZeroVM instance only serves one Python script, it needs only the Python environment and nothing else. More interestingly, ZeroVM does not even implement a TCP/IP stack.

Another interesting aspect of ZeroVM is its support for determinism. Since a conventional VM runs tons of services and applications, when it fails or produces a fault, the fault may be hard to reproduce. For example, if the fault is due to a race condition between running applications, the same environment will likely not reproduce the same fault in later runs. Maybe this is one reason debugging cloud apps is hard. ZeroVM's solution is its deterministic environment, which it achieves through its single-thread model: ZeroVM runs only one application in a single thread, so there are no race conditions and no signals to interrupt the thread. So, life is easy and nice when debugging ZeroVM execution.

You may find more documentation on ZeroVM in the references section.

References:
http://zerovm.org/index.htm

https://github.com/zerovm/zerovm

Categories: openstack, zero-VM