
Monday, 4 August 2014

The Internet of Things

A concise yet comprehensive definition of the Internet of Things:

#IoT paradigm = sensors + network + actuators + local and cloud intelligence + creative UI

(Read the full article: #IoTH: The Internet of Things and Humans)

The most interesting aspect is the concept of “intelligence”. In the near future, AI (artificial intelligence) will be pervasive, not applied only to a restricted group of highly specialized applications.
Current devices, despite the "smart" label (smart TVs, smartphones, smartwatches, etc.), are quite stupid. They can handle complex input and output (speech recognition and synthesis, 3D, NFC, DLNA, big data searches), but they are almost incapable of understanding meaning, making decisions, and learning.
In the near future I expect artificial intelligence to spread widely into daily life; each device will be able not only to receive input, both explicit (I press a button) and implicit (I leave home), but also to make appropriate decisions.
In my opinion, the next major developments in computer science will be artificial intelligence, logic, and algorithm engineering.



Saturday, 14 June 2014

The Economics of the Cloud

Great books and articles are made to last. Recently, I had the opportunity to reread this paper, written about four years ago but still full of insights.

http://www.microsoft.com/en-us/news/presskits/cloud/docs/the-economics-of-the-cloud.pdf

I really like the opening:
“When cars emerged in the early 20th century, they were initially called horseless carriages. Understandably, people were skeptical at first, and they viewed the invention through the lens of the paradigm that had been dominant for centuries: the horse and carriage.”
This is the key to unlocking the power of the cloud: free ourselves from old paradigms and think differently.


Economics is always an important driver of innovation, and the conclusions of this paper are clearly a powerful motivator behind many IT changes.

Happy reading!


Friday, 13 June 2014

OpenStack from a Microsoft perspective

Big tech events are always an invaluable source of information. My favorites are TechEd, Build, the OpenStack Summit, Black Hat, DEF CON, Velocity, and so on. It is not realistic to attend every event around the world in person, but online you can easily find videos, slides, source code, and all kinds of information.

At the recent TechEd 2014 conference (sessions available here), something grabbed my attention: Microsoft CloudOS vs OpenStack. I could not resist, so I started to watch...

The presentation starts with an interesting overview of the current cloud platform ecosystem.
The slide below shows a comprehensive view of all the releases of the different platforms:


Then it continues with a comparison of the different communities...


Finally, there is also a side-by-side feature and release comparison of the two platforms:




The presentation ends with the following considerations:
“A successful CloudOS deployment may make you feel like a Rocket Scientist…”
“A successful OpenStack deployment will make you feel like a Nuclear Physicist…”

I intentionally won't explain or comment on the quotes above; instead, I invite you to watch the full presentation.


Have fun and stay tuned!


Thursday, 8 May 2014

Cloud Architecture Revolution



The introduction of cloud technologies is not a simple evolution of existing ones, but a real revolution. Like all revolutions, it changes points of view and redefines meanings; nothing is as it was before. This post analyzes some key words and concepts commonly used in traditional architectures, redefining them from the standpoint of the cloud. Understanding the new meaning of these words is crucial to grasping the essence of a pure cloud architecture.

<<There is no greater impediment to the advancement of knowledge than the ambiguity of words.>>
THOMAS REID, Essays on the Intellectual Powers of Man
Nowadays, we are asked to challenge the limits of traditional architectures and go beyond the usual notions of scalability: supporting millions of users (WhatsApp's 500 million), billions of transactions per day (Salesforce's 1.3 billion), five nines of availability (AOL's 99.999%). I wish all of you the success of the examples cited above, and do not think it is impossible to reach such mind-boggling numbers: using cloud technology, anyone can create a service with a small investment and immediately have a world stage. If it succeeds, the architecture must be able to scale appropriately.

Reusing the same design criteria, or simply moving the current configuration to the cloud, does not work and can lead to unpleasant surprises.

Infrastructure - commodity HW instead of high-end HW
This is the first shocking concept to understand about cloud architectures. Forget high-availability, high-performance hardware: the largest cloud service providers, like Amazon and Azure, use commodity hardware, which is more prone to breakdowns and other problems than traditional environments.
This does not prevent using the cloud for enterprise applications with high-availability and high-performance requirements; it simply means the architecture must be redesigned according to different criteria of resilience and distribution.

Scalability - dynamic instead of static
Scalability is one of the important features of a good architecture design even in the traditional approach. However, scaling an architecture used to be an exceptional event, requiring planning, budget allocation, new hardware purchases, and system reconfiguration. This led to architectures that were very stable over time and sized for the maximum load the application or service had to support.
In the cloud, all resources are dematerialized: they can be requested on demand and paid for by the hour. Hardware resources are managed at a high level of abstraction through APIs (Application Programming Interfaces), so the provisioning and de-provisioning of any resource can be automated.
Today an architecture is required to adapt to demand, expanding by adding new nodes or contracting by removing them. It must be possible to use a large number of nodes during peak hours or certain periods of the year and fewer during low usage. Obviously, the architecture must be aware of these variations and change accordingly.
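To make the idea concrete, here is a minimal sketch of such a feedback loop in Python. It is purely illustrative: get_average_cpu, add_node, and remove_node are hypothetical hooks standing in for the provider's monitoring and provisioning APIs, and the thresholds are arbitrary.

```python
import time

SCALE_OUT_THRESHOLD = 0.75   # add capacity above 75% average CPU
SCALE_IN_THRESHOLD = 0.30    # release capacity below 30% average CPU
MIN_NODES, MAX_NODES = 2, 20

def autoscale(get_average_cpu, add_node, remove_node, node_count):
    """Grow or shrink the pool of nodes one at a time, based on load."""
    while True:
        cpu = get_average_cpu()
        if cpu > SCALE_OUT_THRESHOLD and node_count < MAX_NODES:
            add_node()
            node_count += 1
        elif cpu < SCALE_IN_THRESHOLD and node_count > MIN_NODES:
            remove_node()
            node_count -= 1
        time.sleep(60)   # re-evaluate every minute
```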

Availability - resiliency instead of no failure
Traditional architectures consider resources to be always available. The unavailability of a resource is neither desirable nor planned; in other words, it is an anomaly. In the cloud, the unavailability of a resource is no longer an exceptional event, whether because of commodity hardware or because a resource, such as an application server, disappears due to a sudden and intentional scale-down.
The architecture must be able to handle transient failures: in case of problems it should react properly and complete the transaction. In other words, it must be resilient.
In the most advanced cloud architectures, there are agents designed to deliberately generate failures in production environments to ensure that the application responds appropriately; a famous example is Netflix's Chaos Monkey.
A good way to achieve resilience is to introduce queues so that the components of the application are loosely coupled and autonomous.
In the example below, the web role puts requests in a queue and worker roles handle them. When a worker role takes over a request, the message is not removed from the queue but becomes invisible to the other worker roles. If the processing worker role does not acknowledge the message within a certain timeout, for example because it terminated unexpectedly, the request reappears in the queue and becomes available to another worker role.
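Here is a toy, self-contained Python sketch of the visibility-timeout mechanism just described; a real deployment would rely on a queue service such as Azure Storage Queues or Amazon SQS, which expose the same semantics.

```python
import time

class VisibilityQueue:
    """In-memory queue where a fetched message is hidden, not deleted."""
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.messages = []                    # each entry: [payload, invisible_until]

    def put(self, payload):
        self.messages.append([payload, 0.0])

    def get(self):
        """Return a visible message and hide it for visibility_timeout seconds."""
        now = time.time()
        for msg in self.messages:
            if msg[1] <= now:
                msg[1] = now + self.visibility_timeout
                return msg
        return None

    def delete(self, msg):
        """Acknowledge a processed message so it never reappears."""
        self.messages.remove(msg)

def worker(queue, process):
    """Worker role loop: if process() crashes before delete(), the message
    becomes visible again after the timeout and another worker picks it up."""
    while True:
        msg = queue.get()
        if msg is None:
            time.sleep(1)
            continue
        process(msg[0])
        queue.delete(msg)
```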



Performance - decomposition and distribution instead of stress
Cloud architectures are quite sensitive to performance constraints inherent in the technology itself, such as the non-proximity of resources, commodity hardware, and resource sharing. Methods based on stressing a single resource, like increasing IOPS, bandwidth, processor speed, or RAM size, are no longer usable: in the cloud, scale-up is very limited and in some cases impossible. Rather than strengthening a single component, decomposing the architecture into modules deployed across multiple nodes is the key to improving performance.
Traditionally, a system is considered scalable if and only if performance remains the same as requests and nodes increase; scalability is defined as a function of performance.
In cloud architectures, the relationship should be reversed: scalability must be exploited to achieve better performance. The idea is to spread the computational cost across nodes in order to perform better.
This approach must be adopted not only at the application layer but also at the data layer, by means of database sharding, which distributes the database, and consequently the load, across multiple nodes, each managing a specific portion of the data.
A wise use of caching at every level (application caching with memcached or Amazon ElastiCache, network caching on firewalls, and content caching with a CDN) will also help improve the performance of the architecture.
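As a small illustration of the cache-aside pattern at the application layer, here is a sketch using the python-memcached client; load_user_from_db is a hypothetical data-layer call and the key scheme is made up for the example.

```python
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def get_user(user_id, load_user_from_db):
    key = "user:%s" % user_id
    user = mc.get(key)                        # 1. try the cache first
    if user is None:
        user = load_user_from_db(user_id)     # 2. fall back to the database/shard
        mc.set(key, user, time=300)           # 3. cache it for 5 minutes
    return user
```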


Reliability - MTTR instead of MTBF
Traditionally, the reliability of an architecture is tied to the MTBF, the mean time between failures: the longer the interval, the more reliable the system. With commodity servers, this can no longer be considered an achievable goal, so the concept of reliability has to be revisited and connected to resiliency. A cloud architecture is more reliable the lower its MTTR, the mean time to recovery: an MTTR equal or close to zero yields a highly reliable architecture, resilient to any failure.


Capacity Planning - scale unit planning instead of worst case planning
Capacity planning in a traditional architecture is designed to estimate the sizing required to support the maximum load, so that the system responds well enough even in the worst situations. This inevitably wastes a lot of resources for most of the life cycle of the system. In contrast, capacity planning for a cloud architecture is oriented toward determining the scale unit, i.e. the amount of load at which to scale out or in, possibly automatically.
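A back-of-the-envelope example of scale-unit planning: size one unit, then derive how many units a given load requires. The numbers below are purely illustrative.

```python
REQUESTS_PER_NODE = 500        # measured capacity of a single node (req/s)
NODES_PER_SCALE_UNIT = 2       # nodes bundled in one scale unit

def scale_units_needed(expected_load_rps):
    unit_capacity = REQUESTS_PER_NODE * NODES_PER_SCALE_UNIT
    return -(-expected_load_rps // unit_capacity)   # ceiling division

print(scale_units_needed(1200))   # -> 2 units
print(scale_units_needed(4800))   # -> 5 units
```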









We have seen how cloud architectures diverge from everything done in the past because they need to respond to different scenarios. Many of the approaches used in the past are no longer applicable, and the fundamental concepts underlying architectures, such as scalability, performance, reliability, and capacity planning, have to be deeply revised. To summarize, the golden rules for a correct cloud architecture design are:

Enable Scaling - be able to adapt to environmental conditions

Expect Failure - be able to adopt a resilient attitude

Always keep the above rules in mind during the design and implementation of a real cloud architecture.

This concludes a long post, but in the future it could be interesting to analyze some design techniques typical of the cloud world.
Have fun!



Friday, 18 April 2014

OpenStack Compendium - The Architecture

In this post we continue our journey into OpenStack with a presentation of its high-level architecture.

OpenStack is not a single huge open source project: it has been architected and designed using a modular approach in which each component is developed independently by a dedicated project.
This is the big picture of the OpenStack architecture, including every module and its relations with the other components:




Because OpenStack is highly configurable, it is possible to create a tailored solution using only a subset of modules to cover specific needs. For this reason, deployments range from a single server running a single module such as Glance or Cinder to very complex datacentre solutions based on multiple interrelated modules configured in high-availability mode. You can also install the entire stack on a single box for development or evaluation purposes.

Compute (Project NOVA)
Compute is the virtual machine provisioning and management module. It controls many aspects of the virtualization platform: networking, CPU, storage and memory allocation, creating, controlling, and removing virtual machine instances, security, and access control.
Compute supports a wide range of virtualization platforms, from open source to proprietary:


Product | Interface | Description
Kernel Virtual Machine (KVM) | Libvirt | Most popular technology for small-scale deployments. Supports advanced operations such as live migration and resize.
Xen | Libvirt | Most popular (along with XCP/XenServer) technology for larger-scale and production deployments.
Citrix XenServer | XenAPI | Citrix's commercial version of the Xen-based virtualization product. Supports advanced features.
Xen Cloud Platform (XCP) | XenAPI | Citrix's open source version of XenServer. Supports a subset of XenServer features.
VMware ESX / ESXi / vSphere | vSphere API | VMware virtualization platform.
User Mode Linux | Libvirt | Generally considered a lower-performance virtualization option.
Microsoft Hyper-V | Windows Management Instrumentation (WMI) | Microsoft's hypervisor-based virtualization technology.
QEMU | Libvirt | Provides the basis for most Linux-based virtualization technologies (such as KVM and VirtualBox).
Linux Containers (LXC) | Libvirt | An operating-system-level partitioning technology that allows running multiple isolated servers (containers) on a single kernel.

The module implements its own solutions to manage networking and storage, but it can also be integrated with the other modules of the architecture.
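As a taste of how Nova can be driven programmatically, here is a minimal sketch using the python-novaclient library (compute API v2); the credentials, endpoint, image, and flavor names are placeholders.

```python
from novaclient import client

nova = client.Client("2", "admin", "secret", "demo-project",
                     "http://controller:5000/v2.0")

image = nova.images.find(name="cirros-0.3.2")   # image registered in Glance
flavor = nova.flavors.find(name="m1.small")     # predefined instance size

server = nova.servers.create(name="demo-vm", image=image, flavor=flavor)
print(server.id, server.status)
```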

Object Storage (Project SWIFT)
Object Storage is a distributed storage system for static data such as generic files, multimedia, or virtual machine images. There are multiple layers of redundancy and automatic replication, so the failure of a node does not result in data loss and recovery is automatic. It uses the concepts of Zone and Region to write multiple copies of a file to isolated storage (different servers or different racks, up to different datacentres).
It scales horizontally: adding new servers increases storage capacity and replication.
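A minimal sketch with python-swiftclient shows the object-store usage pattern; the endpoint and credentials are placeholders for a Keystone-backed Swift installation.

```python
import swiftclient

conn = swiftclient.Connection(authurl="http://controller:5000/v2.0",
                              user="demo", key="secret",
                              tenant_name="demo-project", auth_version="2")

conn.put_container("backups")                            # create a container
conn.put_object("backups", "notes.txt", contents="hello swift")

headers, body = conn.get_object("backups", "notes.txt")  # read it back
print(body)
```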

Block Storage (Project CINDER)
Block Storage manages the volumes used by OpenStack virtual machines. It supports snapshots to back up data; snapshots can be restored or used to clone a block storage volume. Volumes can be dynamically attached to and detached from virtual machines, but multiple machines cannot attach the same volume.
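A minimal sketch with python-cinderclient (v1 API), showing volume and snapshot creation; credentials and names are placeholders.

```python
from cinderclient import client

cinder = client.Client("1", "admin", "secret", "demo-project",
                       "http://controller:5000/v2.0")

volume = cinder.volumes.create(size=1, display_name="data-vol")   # 1 GB volume
snapshot = cinder.volume_snapshots.create(volume.id,
                                          display_name="data-vol-snap")
print(volume.id, snapshot.id)
```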

Networking (Project NEUTRON)
OpenStack provides a flexible networking layer for the cloud. It allows managing multiple networks, controlling traffic and IP addressing, and connecting servers and devices to one or more networks.
Neutron abstracts the management of networking services through a pluggable backend architecture responsible for interacting with the underlying infrastructure.
A large number of plug-ins are available, including:

Open vSwitch Plugin
Cisco UCS/Nexus Plugin
Linux Bridge Plugin
Modular Layer 2 Plugin
Nicira Network Virtualization Platform (NVP) Plugin
Ryu OpenFlow Controller Plugin
NEC OpenFlow Plugin
Big Switch Controller Plugin
Cloudbase Hyper-V Plugin
Brocade Neutron Plugin
IBM SDN-VE Plugin

OpenStack Networking also implements a framework to extend its capabilities with intrusion detection systems (IDS), load balancing, firewalls, and virtual private networks (VPN).
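A minimal sketch with python-neutronclient, creating a network and a subnet; all values are placeholders.

```python
from neutronclient.v2_0 import client

neutron = client.Client(username="admin", password="secret",
                        tenant_name="demo-project",
                        auth_url="http://controller:5000/v2.0")

net = neutron.create_network({"network": {"name": "private-net"}})
net_id = net["network"]["id"]

neutron.create_subnet({"subnet": {"network_id": net_id,
                                  "ip_version": 4,
                                  "cidr": "10.0.0.0/24"}})
```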

Identity services (Project KEYSTONE)
Keystone provides identity management, authentication, and authorization services using a central repository of users mapped to the OpenStack services they can access.
OpenStack Identity integrates with existing backend directory services such as LDAP. It supports multiple forms of authentication, including username and password, tokens, and AWS-style logins.
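A minimal sketch with python-keystoneclient (v2.0 API), authenticating and listing tenants; credentials and the endpoint are placeholders.

```python
from keystoneclient.v2_0 import client

keystone = client.Client(username="admin", password="secret",
                         tenant_name="admin",
                         auth_url="http://controller:5000/v2.0")

print(keystone.auth_token)              # token reusable with other services
for tenant in keystone.tenants.list():  # requires admin rights
    print(tenant.name)
```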

Image services (Project GLANCE)
The OpenStack Image Service manages virtual disk images. Images can be used as templates to get new virtual servers up and running, or to store and catalogue an unlimited number of backups. Glance supports many different storage backends and formats:


Image Store | Description
Filesystem | Stores, deletes, and gets images from a filesystem directory or shared drive (e.g., NFS).
HTTP | Read-only image store; gets images via a URL.
Swift | Stores, deletes, and gets images from a Swift installation.
S3 | Gets or deletes images (but does not store them) on Amazon's S3 service.


Disk Format | Description
Raw | Unstructured disk format.
VHD | Most common format.
VMDK | VMware format.
qcow2 | QEMU image format; native format for KVM and QEMU.
VDI | Oracle VM VirtualBox virtual disk image format.
ISO | Archive format for optical discs.
AMI, ARI, AKI | Amazon machine, ramdisk, and kernel images (respectively).


Container Format | Description
OVF | An open standard for distributing one or more virtual machine images. Read more about this standard at http://www.dmtf.org/standards/ovf.
aki, ari, ami | Amazon kernel, ramdisk, or machine image (respectively).
Bare | No container for this image.
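A minimal sketch with python-glanceclient (v1 API), registering a qcow2 image; the endpoint, token, and file name are placeholders (the token can be obtained from Keystone).

```python
from glanceclient import Client

glance = Client("1", endpoint="http://controller:9292", token="ADMIN_TOKEN")

with open("cirros-0.3.2-x86_64-disk.img", "rb") as image_data:
    image = glance.images.create(name="cirros-0.3.2",
                                 disk_format="qcow2",
                                 container_format="bare",
                                 data=image_data)
print(image.id, image.status)
```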

Metering and Monitoring services (Project CEILOMETER)
The primary goals of this module are monitoring and metering, but the framework is easily extended to collect data for other needs. It can serve as a single point of contact for billing systems to acquire all the measurements they need to bill customers across all the current OpenStack core components.

Dashboard (Project HORIZON)
The dashboard provides a web-based interface to many of the OpenStack services, including Nova, Swift, and Keystone.
Horizon is based on a Django module called django-openstack. Django is a high-level Python web framework for creating web applications (https://www.djangoproject.com/).

Orchestration (Project HEAT)
Heat provides a framework to manage the entire lifecycle of infrastructure and applications within OpenStack clouds. It offers template-based orchestration: a cloud application is described in a template, and Heat executes the appropriate OpenStack API calls to bring it to life.
The templates are modelled on AWS CloudFormation and can create most OpenStack resource types (such as instances, floating IPs, volumes, security groups, and users), as well as provide more advanced functionality such as instance high availability, instance autoscaling, and nested stacks.
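To give an idea of what such a template looks like, here is a minimal CloudFormation-style template expressed as a Python dict and dumped to JSON; the image, flavor, and key names are placeholders, and the stack would be submitted through the Heat CLI or the orchestration API.

```python
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single-server stack",
    "Resources": {
        "MyServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "cirros-0.3.2",
                "InstanceType": "m1.small",
                "KeyName": "demo-key"
            }
        }
    }
}

print(json.dumps(template, indent=2))
```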

This has been a high-level presentation of the OpenStack architecture, describing the main features and capabilities of each module. Each module could be drilled down further to illustrate its internal architecture and workings, but that is beyond the scope of this post.

Now it is really time to get hands-on!


Sunday, 13 April 2014

OpenStack Compendium

I'm a great fan of Amazon Web Services, perfect for creating pure cloud architectures without compromises, and I follow the evolution of Microsoft Azure with great interest. However, both are proprietary platforms; what about open platforms?
The open approach has always been a revolutionary concept in computer history, often deciding the success of platforms and technologies: the success of DEC's PDP was also due to the publication of detailed specs about its inner workings and interfaces; IBM and the era of PC-compatible computers were built on an open architecture and non-proprietary components; AT&T opened up the C language and its source code; GNU/Linux and the open source movement; the open approach of the Java architecture with "write once, run anywhere"...

Recently, I had the opportunity to evaluate this compelling cloud platform, so I decided to share with you some notes taken during my investigation.

The OpenStack project aims to create an open source cloud computing platform for public and private clouds, focused on scalability without complexity. One of the defining core values behind the project is its embrace of openness, with both open standards and open source code.

OpenStack has been released under the Apache 2.0 license. Since that first release it has grown into a large community supported by over 12,000 contributors in nearly 130 countries, and more than 850 companies including Red Hat, Canonical, IBM, AT&T, Cisco, Intel, PayPal, Comcast, and CloudWatt (see http://www.openstack.org/foundation/companies/).


The platform is open and designed to support a broad range of products, technologies, and standards. It has no proprietary hardware or software requirements, and it can integrate with legacy systems and third-party products. It also provides a management API with a compatibility layer for the Amazon Web Services platform and Eucalyptus.

OpenStack is a collection of open source technologies delivering a massively scalable cloud operating system. It has been architected and designed using a modular approach, where each component is developed independently by a dedicated project.
A specific OpenStack implementation may use all components or only a subset, according to its needs. The platform provides the full set of features required to cover every aspect of a cloud service.

Self-management
Computing capabilities can be provisioned (and deprovisioned) automatically according to need, without requiring human interaction with the service provider: a formidable way to respond just in time to any change required by the business.
On-demand self-service management covers virtual instances, networks, VPNs, and more.

Easy VM provisioning
The platform includes a complete image management system with a catalog service for storing and querying virtual disk images. An image can be used as a template, with a preconfigured application, to get new virtual servers up and running in a very short time.

Metering and reporting for billing
The metering capabilities keep detailed track of resource usage, enabling a pay-per-use approach based on consumed resources.

Monitoring
An extensive monitoring system allows controlling the infrastructure to identify exactly how the system is used and to plan carefully for future expansion. On-demand resource provisioning can also be used to cope with peak periods, when the service is heavily loaded.

Automation
The entire platform can be fully managed through a broad set of APIs. Instance provisioning and deprovisioning, network configuration, and monitoring can all be driven programmatically. This drastically reduces the effort needed to manage the infrastructure and improves the reliability of the delivery processes.

Interoperability and Portability
The open source approach of the architecture facilitates seamless integration of different platforms and technologies. It supports a wide range of virtualization platforms, from proprietary software (VMware ESXi and Microsoft Hyper-V) to open source solutions (KVM or Citrix XenServer). Additionally, the backing of best-of-breed hardware vendors like Intel, HP, IBM, Dell, and Cisco guarantees a high level of compatibility.

OpenStack is one of the most promising cloud computing platforms, thanks to its open source nature and its wide adoption by cutting-edge technology providers:
 - IBM: cloud offering, IBM SmartCloud, based on the OpenStack project. Last year IBM acquired SoftLayer, which is based on a different platform, CloudStack, but it is currently working on a compatibility layer called Jumpgate (http://blog.softlayer.com/2014/building-bridge-openstack-api)
 - HP: enterprise-grade public cloud based on OpenStack technology
 - Canonical: Ubuntu OpenStack solution for private clouds
 - Red Hat: Red Hat's OpenStack Cloud Infrastructure Partner Network is the world's largest commercial OpenStack ecosystem
 - Rackspace: a worldwide leading provider of cloud services based on OpenStack
The first-class contributors embracing the project guarantee the stable development of the platform. The top corporate contributors are Red Hat, HP, Rackspace, and IBM (http://activity.openstack.org/dash/releases/index.html?data_dir=data/havana). The Havana release was developed with contributions from 933 developers.
The roadmap is ambitious, with new features added in every release (two releases per year, in April and October). The community around the project is very active and spread around the world, including Italy and France. Many conferences are dedicated to OpenStack (14 in 2013), with great participation: more than 3,500 people attended the 2013 OpenStack Summit in Hong Kong.

All the premises are very good, but what does the future hold?

Saturday, 29 December 2012

Renting a flat or buying a cottage

Embrace cloud computing

Cloud computing is a topic that I often have to explain, even to IT specialists. According to the definition provided by Wikipedia, <<Cloud computing is the use of computing resources (hardware and software) that are delivered as a service over a network (typically the Internet)>>.
Furthermore, NIST enriches the definition by presenting the five characteristics required of cloud services and proposing three service models: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Although these definitions are precise and well structured, there is evident difficulty in understanding the concepts behind cloud computing and identifying the key factors to assess carefully before embracing a cloud service.

In situations like this, when you must communicate complex concepts in a simple way, a simile often comes in handy. The cloud approach, compared to the classical approach to IT management, is like renting a flat in the city rather than buying a cottage in the countryside.

Using a cloud service is like living in a flat. It is possible to live quietly with tidy and respectful neighbours, or to face coexistence challenges (instances/tenants using the same hardware). It is necessary to deal with shared services such as stairs, balconies, and the garden (computing, storage, and networking) and to be sure that the residents comply with the building regulations (the QoS policy of the provider). Obviously, a building located in a downtown residential area is definitely more reliable than one located in a suburb.

Estimate the Provider Reputation
How long has the provider been in the business?
What is the provider's history?
Who are its clients? Are there case studies and testimonials?
Will it still be there in 5 or 10 years?

At the same time, it is important to look carefully at the security features provided, like the alarm system, the armoured door, and the surveillance. In an owned house, I could set up everything needed to guarantee acceptable security. In a rented apartment, I can only accept the existing security systems or negotiate additional security with the owner.

Request a Trustworthy Service
What is the Service Level Agreement (SLA)?
What is the guaranteed uptime? Yearly or monthly?
Are there service credits in case of failure?
What certifications and accreditations does the provider hold?
ISO 27001? SAS 70? PCI?

Moreover, a flat in the city lets us benefit from services and opportunities not available or affordable in the countryside: for example, proximity to the workplace (providing global services with low latency using a CDN), mobility services (accessing the service from anywhere, over the Internet, from any device), social services, and security services.

Take Advantage of the New Opportunities
What is the full range of services of the cloud provider?
Rethink the current in-house service; do not just move to the cloud.
Can I benefit from additional security or availability?

A rented flat lets us easily move to another location (change provider) if our needs change and are no longer satisfied by the current accommodation. A forced relocation can also happen if the owner simply goes bankrupt.

Keep your door open
Adopt the loose-coupling principle for provider/API/platform.
Are open standards or widely used technologies employed?
Are there procedures, standard data formats, or service interfaces
that guarantee data, application, and service portability?
Are there additional costs for data export?
Renting an apartment in town brings undoubted advantages, such as the minimal upfront investment and the possibility of changing flats as needs change (Rapid Elasticity). There are no maintenance costs, and you can enjoy common services (Resource Pooling). Living in the city, mobility is simplified by subways, trains, and taxis (Broad Network Access). However, as needs grow to require a house of considerable size, a car, and a garage, the costs can increase considerably.

Save money by redesigning your service according to cloud principles
Take advantage of no upfront costs.
Take advantage of elasticity: no spare resources.
Take advantage of reduced operational costs.
Adopt the KISS approach.
Consider that at large scale the cost may be higher than colocation or in-house.

Cloud computing is an interesting opportunity for IT. It frees development and operations from the chains of physical equipment. It allows redesigning services according to the agile principles generally adopted only in software development (this could be a topic for a new post). Moreover, it opens up solutions and service levels that are not affordable in the classical approach without huge investments.
Do not make the mistake of considering cloud computing just a new technology. It is a new paradigm in the development of infrastructure and services. Using the established approaches and methodologies in the cloud is worthless and, sometimes, even counterproductive.

The three key words of Cloud Computing are speed, elasticity and new opportunities.

Have fun!



