
Hyperledger Fabric test network on AWS using Ansible

12.8.2018 | 9 minutes of reading time

Prompted by my dissatisfaction with existing cloud-based Hyperledger Fabric offerings, I would like to motivate and explain the automated setup of a Hyperledger Fabric test network in this article.

Attention: This article is from 2018 and refers to the deprecated Fabric version 1.2 and other deprecated components such as Hyperledger Composer. Since this article was written, Hyperledger has evolved and changed significantly, so the information in it should be treated with caution.

Fabric is currently the most active and widely supported implementation within Hyperledger, a Linux Foundation project. Version 1.2 was recently released and confirms once more the great ambitions behind Fabric and its ecosystem. With Fabric, (private/permissioned/consortium) blockchain networks can be designed, operated, and developed. However, Fabric is primarily meant as the platform of such networks, handling the configuration, the cryptographic material, and the basic functions of the distributed ledger. For development it is usually recommended to use the Hyperledger Composer framework, since it greatly simplifies the development and administration of business networks. An example of developing an application with Fabric and Composer was published some time ago: Implementation of a blockchain application with Hyperledger Fabric and Composer.

Distributed ledgers in the cloud – bleeding edge or service wasteland?

Cloud service providers such as AWS (Amazon Web Services), Azure and IBM Cloud also want a share of the private blockchain technology business and are already publicly offering their first services. To date these are simple templates for the respective platforms, such as Fabric and Ethereum, which often only boot a minimal combination of nodes as Docker containers in a single virtual machine. Such setups are therefore only intended for test environments and are still far from a production-ready implementation of the Fabric target architecture. For this article we limit ourselves to a similar scope.

Fortunately, a Fabric test network is all we need in this article. However, we do not use a ready-made template from a cloud provider, since their properties are not always satisfactory and they can only be extended with the respective template languages if a component is missing. For provisioning we prefer Ansible, so that in theory any Linux server can be used. As cloud provider we use AWS out of familiarity; if you are limited to another cloud provider for some reason, feel free to use it. However, we should stick with Ubuntu 16.04, since the setup has been tested thoroughly with it.

Design of a Hyperledger Fabric test network

To construct Fabric networks, different node types are available, each intended for different tasks: peer, orderer, and certificate authority nodes. (Non-)endorsing peer nodes are added to the network by each participating organization and form the main part of the infrastructure that the organization operates. This ensures that every organization technically shares control over the transactions in the network, just like the other participants. CAs (certificate authorities) create, authenticate and authorize the identities in the network and encapsulate the key management of the participants. In addition to a root CA, each participating organization can operate its own CA to respond individually to the identity requirements of its network actors. The orderer nodes ensure that transactions accepted by the network are arranged correctly and consistently into blocks and communicated back to the rest of the network; the ordering service therefore forms a critical part of the configurable consensus mechanism.

The minimum development setup in Fabric ideally consists of one peer node of an organization, a CA for that organization, and an orderer node (in solo mode, not intended for productive use). Including the CouchDB for the peer node, such a network comprises four Docker containers. It is therefore a good idea to use an existing piece of maintained code from Hyperledger for the heavy lifting: fabric-dev-servers from the Hyperledger Composer repositories does most of the job for us. It already offers this setup via Docker Compose, includes easy-to-use shell scripts for starting, stopping and resetting the network, and is specialized for developing the application layer with Composer. In addition, an instance of composer-playground and composer-rest-server is useful in the Fabric test network environment to demonstrate applications, integrate them, and deliver them to stakeholders. However, a composer-rest-server requires a Composer card (access to a business network using one of its identities) that was issued for a specific business network, so no functioning REST server can be configured without an installed Composer application.
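For orientation, the manual steps that fabric-dev-servers takes care of look roughly like the following sketch (archive URL, script names and the FABRIC_VERSION value are taken from the Composer documentation as of 2018 and may have changed since; our Ansible playbook will automate these steps anyway):

# Sketch: fetching and starting fabric-dev-servers by hand
mkdir ~/fabric-dev-servers && cd ~/fabric-dev-servers
curl -O https://raw.githubusercontent.com/hyperledger/composer-tools/master/packages/fabric-dev-servers/fabric-dev-servers.tar.gz
tar -xzf fabric-dev-servers.tar.gz

export FABRIC_VERSION=hlfv12   # target the Fabric 1.2 images
./downloadFabric.sh            # pull the Docker images (peer, orderer, CA, CouchDB)
./startFabric.sh               # boot the four containers via Docker Compose
./createPeerAdminCard.sh       # import the PeerAdmin card into the card store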

Simplified architecture of the intended solution and components that are important for this setup in the EC2 instance. The rest-server is in brackets because it cannot be part of this setup before deploying a business network.

Designing, explaining and automatically creating a production-ready architecture would go beyond the scope of this article and is planned for a later date.

Let’s get started – Creating, starting and provisioning the network

After this introduction we can get down to the actual work. What do we need for it?

  1. An AWS IAM account with sufficient rights to create an EC2 instance with EBS (Elastic Block Storage), a keypair and a custom security group for our instance
  2. A current Ansible installation on the local machine
  3. As well as Git and SSH

Setup of a suitable Ubuntu EC2 instance

First we start a fresh EC2 instance with the Amazon Machine Image (AMI) “Ubuntu Server 16.04 LTS (HVM), SSD Volume Type”. The instance type “t2.medium” should be more than sufficient. During the AWS setup wizard we create a new keypair with a meaningful name, download it and put it into a path of our choice (e.g. ~/.ssh/keys/my-new-key.pem). We should not forget to restrict its permissions with chmod 400 ~/.ssh/keys/my-new-key.pem so that only the current user can read the file.
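If you prefer the command line over the wizard, roughly the same result can be achieved with the AWS CLI. The following is only a sketch: the AMI ID is a region-specific placeholder and has to be replaced, and the 16 GB of EBS storage anticipate the sizing discussed below:

# Sketch: keypair and instance via the AWS CLI (adjust AMI ID and region)
aws ec2 create-key-pair --key-name my-new-key --query 'KeyMaterial' --output text > ~/.ssh/keys/my-new-key.pem
chmod 400 ~/.ssh/keys/my-new-key.pem
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.medium \
    --key-name my-new-key \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":16}}]'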

In order to comfortably connect to the instance we can create a host entry in ~/.ssh/config:

Host my-host
    Hostname (host-of-the-instance).compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/keys/my-new-key.pem

Then the SSH connection can be established without specifying all parameters (ssh my-host), and working with Ansible becomes easier. Note that the default user of the Ubuntu AMI is ubuntu (ec2-user only applies to Amazon Linux images). When the wizard asks for the SSD storage of the instance, we can go straight for 16 GB; with the preset 8 GB we have quickly hit the limit in the past.

A new security group should also be created for the EC2 instance. We configure the traffic rules so that only the required TCP ports accept inbound connections (“INBOUND”); a CLI sketch follows the list below:

  • 7051 for the peer node of org1
  • 7050 for the orderer node
  • 7054 for the certificate authority
  • At least one port for composer-playground as well as composer-rest-server instances that might join the setup later, e.g. 8080 and 3000
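Expressed with the AWS CLI, the security group could be created roughly like this (the group name is chosen freely, the command assumes the default VPC, and the --cidr range should ideally be restricted to your own addresses instead of 0.0.0.0/0):

# Sketch: create the security group and open the required inbound ports
aws ec2 create-security-group --group-name fabric-test --description "Hyperledger Fabric test network"
for port in 7050 7051 7054 8080 3000; do
    aws ec2 authorize-security-group-ingress --group-name fabric-test \
        --protocol tcp --port "$port" --cidr 0.0.0.0/0   # better: your own IP range
done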

Now we can click through to the end of the wizard and let the instance start. At this point we have already created the basis for the Fabric test network on the AWS side. Let’s continue with the provisioning using Ansible.

Provisioning of the Fabric test network with an Ansible Playbook

From now on we want to make changes to the instance as reproducibly and versionably as possible via Ansible playbooks. The advantages over a completely manual installation are obvious: we do not create yet another snowflake server that is hard to upgrade, patch and extend, and we save a lot of time and nerves by not installing everything by hand. Luckily, I have already created a complete Ansible playbook and role setup in a GitHub repository that defines and implements the tasks for provisioning.
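To give an idea of what such a role looks like, here is a purely illustrative task file in the usual roles layout. It is not taken from the repository; a Docker installation on Ubuntu 16.04 might be expressed roughly like this:

# roles/docker/tasks/main.yml – illustrative sketch, not the repository's actual code
- name: Install Docker from the Ubuntu repositories
  apt:
    name: docker.io
    state: present
    update_cache: yes
  become: yes

- name: Ensure the Docker daemon is running and starts on boot
  service:
    name: docker
    state: started
    enabled: yes
  become: yes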

We clone it into our workspace: git clone git@github.com:jverhoelen/hyperledger-fabric-dev-ansible.git && cd hyperledger-fabric-dev-ansible. It essentially contains the playbook, which processes several smaller modules called roles, as well as configurable variables that are used during provisioning. A third file is still missing: it is called hosts, it can be created in the same folder, and its content is very simple, as we only want to provision one host:

[remote]
my-host # name of the host that we have added to ~/.ssh/config
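To check right away whether Ansible can reach the instance via the SSH settings from above, the standard ping module is sufficient:

ansible -i ./hosts remote -m ping   # should answer with "pong" for my-host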

Before executing the playbook, we can adjust the default variables in variables.yml if necessary. Then we can open fire on the instance and watch Ansible (and cowsay) work through the playbook:

ansible-playbook -i ./hosts playbook.yml

At the end we should see a positive summary that confirms the success of the operation.

Local provisioning of Ubuntu 16.04 using Vagrant

If you don’t have an AWS account at hand or just want to check the setup quickly, the whole thing can also be done with a Vagrant VM. The repository also contains a Vagrantfile defining a simple Ubuntu virtual machine, which we can start by executing vagrant up. After the machine has started, we append the generated SSH host configuration to our settings using vagrant ssh-config >> ~/.ssh/config (appending instead of overwriting keeps existing entries such as my-host intact). Look up the result in your SSH config to find the generated host name, usually default; you should then be able to connect with ssh default.
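Condensed into commands, the local variant looks roughly like this:

vagrant up                            # boot the Ubuntu VM defined in the Vagrantfile
vagrant ssh-config >> ~/.ssh/config   # append the generated SSH host entry
ssh default                           # connect using the generated entry

The host name in the Ansible hosts file then has to match this generated entry instead of my-host.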

Summary and Prospects

We have now created a fresh Ubuntu EC2 instance with all necessary prerequisites for a Docker-based setup of Fabric and Composer. All required containers are running on the instance, composer-cli is installed, and the PeerAdmin card for the network has been imported into the card list of composer-cli. The Ansible playbook also downloaded the PeerAdmin connection card and copied it to our host machine’s card store, using the external IP configured in variables.yml.
In addition, an instance of composer-playground runs on port 8080 and can access the cards on the host (~/.composer/cards) at any time. In the browser you can visit the URI of the playground and operate the network as well as any Composer business network definitions installed later.
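An optional way to verify the result (assuming composer-cli is also available on the local machine):

# On the instance (ssh my-host):
docker ps              # the four Fabric containers (peer, orderer, CA, CouchDB) should be up
composer card list     # the imported PeerAdmin card should be listed

# On the local machine:
composer card list     # the downloaded PeerAdmin connection card should appear here as well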

After installing a Composer application on this setup, Composer Playground lets us explore and even interactively develop this application.

Since we do not yet run a Composer business network on the Fabric test network with this setup, no composer-rest-server can be installed yet: it is, after all, only a permissioned proxy in front of a deployed business network.
Components required for a specific application should be installed via self-written automation, for example with Ansible. That way, the basic provisioning of Fabric and Composer can be taken over from the general playbook, while special components are maintained and versioned in separate, use-case-specific playbooks. For example, several different REST server installations on already deployed business networks would be provisioned separately.
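Such a use-case-specific step could, for example, start a REST server once a business network and a matching card exist. The card name below is purely hypothetical, the flags are the standard composer-rest-server options:

# Hypothetical example once a business network card exists
composer-rest-server -c admin@my-network -n never -w true -p 3000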

The security of this setup should also be mentioned. So far we have implemented no mechanism to protect the resulting environment. If the environment is to be used in the long run, for example within the framework of a real project, it is best placed in a private network reached via VPN. If we leave the server open on the internet, potential attackers can exploit any security vulnerabilities that might open up.

The next article on Hyperledger Fabric and Composer will discuss how to create a convenient continuous delivery pipeline for our Composer business network definition using GitLab CI pipelines. This pipeline will build on a running Fabric test network as described in this article.
