How to compile and build Gerrit Plugins?

To build Gerrit Plugins from source, you need:

A Linux or macOS system (Windows is not supported at this time)

zip, unzip, wget

$ yum install zip -y
$ yum install unzip -y
$ yum install wget -y
$ yum install git -y

Python 2 or 3
This is installed on each RHEL 7 and Ubuntu server by default.

Node.js

curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
OR
curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
sudo yum -y install nodejs

Bazel

## RHEL/CentOS 7 64-Bit ##
$ wget https://copr.fedorainfracloud.org/coprs/vbatts/bazel/repo/epel-7/vbatts-bazel-epel-7.repo
$ cp vbatts-bazel-epel-7.repo /etc/yum.repos.d/
$ yum install -y bazel

To install Bazel on Ubuntu, follow:
https://docs.bazel.build/versions/master/install-ubuntu.html

Maven

$ cd /opt
$ wget http://www-us.apache.org/dist/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.zip
$ unzip apache-maven-3.5.4-bin.zip
$ mv apache-maven-3.5.4 maven
$ export PATH=$PATH:/opt/maven/bin

gcc

$ sudo yum install gcc-c++ make

The Bazel build is in-tree driven, which means a plugin can only be built from within the Gerrit source tree; the plugin must be cloned or linked into the gerrit/plugins directory (shown below).

# First become a non-root user

A JDK for Java 8

$ cd
$ wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz
$ tar -xvf jdk-8u181-linux-x64.tar.gz
$ export JAVA_HOME=/home/ec2-user/jdk1.8.0_181
$ java -version

To build gerrit.war:

$ git clone --recursive https://gerrit.googlesource.com/gerrit
$ cd gerrit 
$ bazel build release

To build plugins such as its-jira (its-jira also depends on its-base):

$ cd plugins
$ git clone https://gerrit.googlesource.com/plugins/its-jira
$ git clone https://gerrit.googlesource.com/plugins/its-base
$ cd ..
$ bazel build plugins/its-jira

The output can normally be found at:

bazel-genfiles/plugins/its-jira/its-jira.jar
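To deploy the built plugin, copy the jar into the plugins/ directory of your Gerrit site. A minimal sketch, assuming the site lives at /opt/gerrit (Gerrit periodically scans this directory; the SSH reload only works if remote plugin administration is enabled):

$ cp bazel-genfiles/plugins/its-jira/its-jira.jar /opt/gerrit/plugins/
$ ssh -p 29418 admin@localhost gerrit plugin reload its-jira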

# Some plugins describe their build process in a src/main/resources/Documentation/build.md file. It may be worth checking.

# Some plugins can be built using Maven as well.
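For a plugin that ships a pom.xml, a standalone Maven build typically looks like the following sketch (the plugin name here is a placeholder; check the plugin's own documentation for its supported build method):

$ cd plugins/some-plugin
$ mvn clean package
# the resulting jar is placed under target/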

Reference

  • https://gerrit-review.googlesource.com/Documentation/dev-bazel.html
  • https://gerrit.googlesource.com/gerrit/
  • https://gerrit-review.googlesource.com/Documentation/cmd-plugin-install.html
  • https://gerrit-review.googlesource.com/Documentation/dev-build-plugins.html

What is “Install Verified label” in Gerrit?

The Verified label was originally invented by the Android Open Source Project to mean ‘compiles, passes basic unit tests’. Some CI tools expect to use the Verified label to vote on a change after running.

During site initialization the administrator may have chosen to configure the default Verified label for all projects. In case it is desired to configure it at a later time, administrators can do this by adding the following to project.config in All-Projects:

[label "Verified"]
    function = MaxWithBlock
    value = -1 Fails
    value = 0 No score
    value = +1 Verified
    copyAllScoresIfNoCodeChange = true
The range of values is:

-1 Fails
Tried to compile, but got a compile error, or tried to run tests, but one or more tests did not pass.
Any -1 blocks submit.

0 No score
Didn’t try to perform the verification tasks.

+1 Verified
Compiled (and ran tests) successfully.
Any +1 enables submit.

For a change to be submittable, the change must have a +1 Verified in this label, and no -1 Fails. Thus, -1 Fails can block a submit, while +1 Verified enables a submit.

Additional values could also be added to this label to allow it to behave more like Code-Review. Add -2 and +2 entries to the label.Verified.value fields in project.config to get the same behavior.
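A sketch of what the extended label definition could look like in project.config (the value descriptions here are illustrative, not mandated by Gerrit):

[label "Verified"]
    function = MaxWithBlock
    value = -2 Fails
    value = -1 Does not build
    value = 0 No score
    value = +1 Builds
    value = +2 Verified
    copyAllScoresIfNoCodeChange = true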

As an example, the popular gerrit-trigger plugin for Jenkins/Hudson can set labels at:

  • The start of a build
  • A successful build
  • An unstable build (tests fail)
  • A failed build

Usually the verdict is expressed through the Verified label. Depending on the size of your project and the discipline of the developers involved, you might want to limit access rights for the +1 Verified label to the CI system only. That way it is guaranteed that submitted commits always get built and pass tests successfully.

If the build doesn't complete successfully, the CI system can set the Verified label to -1. However, that means a failed build will block submission of the change even if someone else sets Verified +1. Depending on the project and how much the CI system can be trusted for accurate results, a blocking label might not be feasible. A recommended alternative is to set the Code-Review label to -1 instead, as it isn't a blocking label but still shows a red label in the Gerrit UI. Optionally, to distinguish different results (a build error vs. unstable tests, for instance), the CI system can also set Code-Review +1.

If pushing new changes is granted, it’s possible to automate cherry-pick of submitted changes for upload to other branches under certain conditions. This is probably not the first step of what a project wants to automate however, and so the push right can be found under the optional section.

Suggested access rights to grant, that won't block changes:
Read on 'refs/heads/*' and 'refs/tags/*'
Label: Code-Review with range '-1' to '0' for 'refs/heads/*'
Label: Verified with range '0' to '+1' for 'refs/heads/*'

Optional access rights to grant:
Label: Code-Review with range '-1' to '+1' for 'refs/heads/*'
Push to 'refs/for/refs/heads/*'
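These rights are normally granted through the project access screen, but the equivalent project.config entries (edited on the refs/meta/config branch of All-Projects) look roughly like the sketch below. The group name "CI Servers" is an assumption:

$ git fetch origin refs/meta/config:refs/remotes/origin/meta/config
$ git checkout meta/config
# add to project.config, for example:
#   [access "refs/heads/*"]
#       label-Code-Review = -1..0 group Registered Users
#       label-Verified = 0..+1 group CI Servers
$ git commit -am "Grant non-blocking review and verify rights"
$ git push origin HEAD:refs/meta/config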

Reference
https://gerrit-review.googlesource.com/Documentation/config-labels.html#label_Verified
https://gerrit-review.googlesource.com/Documentation/access-control.html#examples_cisystem
https://groups.google.com/forum/#!topic/repo-discuss/FdN29piSmEQ


What is "Enable signed push support" in Gerrit?

This option defaults to false.

When a client pushes with git push --signed, this ensures that the push certificate is valid and signed with a valid public key stored in the refs/meta/gpg-keys branch of All-Users.

If true, server-side signed push validation is enabled.

Config in gerrit.config: receive.enableSignedPush
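A minimal sketch of enabling it from the command line, assuming the Gerrit site lives at /opt/gerrit (a restart is needed for the change to take effect):

$ git config -f /opt/gerrit/etc/gerrit.config receive.enableSignedPush true
$ /opt/gerrit/bin/gerrit.sh restart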


Importance of the canonical web URL in Gerrit

The canonical web URL must be set. There is also an optional base URL for repositories available over the anonymous git protocol; for example, set it to git://mirror.example.com/base/ to have Gerrit display patch set download URLs in the UI. Gerrit automatically appends the project name to the end of the URL.

The latter is unset by default, as the git daemon must be configured externally by the system administrator and might not even be running on the same host as Gerrit.
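A sketch of setting both values in gerrit.config from the command line, assuming the site lives at /opt/gerrit (the hostnames are placeholders):

$ git config -f /opt/gerrit/etc/gerrit.config gerrit.canonicalWebUrl "https://gerrit.example.com/"
$ git config -f /opt/gerrit/etc/gerrit.config gerrit.canonicalGitUrl "git://mirror.example.com/base/"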


How to backup and restore Gerrit server?

There are three components which should be backed up in Gerrit:

  1. Repositories - The best way to back up the repositories is to set up replication to another Git hosting service using the Gerrit replication plugin. The steps can be found below.
  2. Gerrit database
    Depending on the database in use (H2, MySQL, etc.), take regular database backups.
  3. Gerrit config
    rsync is a good tool for backing up the entire Gerrit site directory (see the sketch after this list).
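A minimal rsync sketch for the site backup, assuming the site lives at /opt/gerrit and a reachable backup host (ideally stop Gerrit or snapshot the filesystem while copying for a consistent backup):

$ rsync -av --delete /opt/gerrit/ backupuser@backup-host:/backups/gerrit-site/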

How to replicate Gerrit repository using replication plugins?

Step 1: Set up a Gerrit server
http://www.devopsschool.com/tutorial/gerrit/gerrit-install-and-configuration.html

Step 2: Create a Project in Gerrit

Step 3: Set up a development machine
git clone http://admin@35.154.81.167:8080/a/prj1 && (cd prj1 && curl -kLo `git rev-parse --git-dir`/hooks/commit-msg http://admin@35.154.81.167:8080/tools/hooks/commit-msg; chmod +x `git rev-parse --git-dir`/hooks/commit-msg)

Step 4: Make a sample commit
> touch file1.txt; git add .; git commit -m "adding first version"

Step 5: Push the sample change and submit it
> git push origin HEAD:refs/for/master

Step 6: Create $site_path/etc/replication.config

The content of the file is:
[remote "github"]
    url = git@github.com:scmgalaxy/${name}.git

Within each URL value the magic placeholder `${name}` is replaced with the Gerrit project name.

Step 7: Generate a public/private key

> ssh-keygen -t rsa

Step 8: Create a file named "config" under /root/.ssh with the following content:

Host github.com
User git
IdentityFile /root/.ssh/id_rsa
StrictHostKeyChecking no
UserKnownHostsFile /dev/null

Step 9: Add the public key to your GitHub account

Step 10: Create a repository on GitHub.com with the same name.
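Once the target repository exists, replication can be triggered on demand over the Gerrit SSH interface. A sketch, assuming the admin user and the default SSH port 29418:

> ssh -p 29418 admin@35.154.81.167 gerrit plugin reload replication
> ssh -p 29418 admin@35.154.81.167 replication start --all --wait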


How Gerrit Works?

When Gerrit is configured as the central source repository, all code changes are sent to Pending Changes for others to review and discuss. When enough reviewers have approved a code change, you can submit the change to the code base.

In addition to the store of Pending Changes, Gerrit captures notes and comments made about each change. This enables you to review changes at your convenience or when a conversation about a change can’t happen in person. In addition, notes and comments provide a history of each change (what was changed and why and who reviewed the change).

A Gerrit project is a workspace consisting of the following elements:

  • Git repository: It is used to store the merged code base and the changes under review that have not been merged yet. Gerrit has the limitation of a single repository per project. There can also be projects without any code repository associated at all (that is, security-only projects).
  • Change references under review: Git commit IDs (expressed as SHA-1 hexadecimal strings) stored in the Gerrit DB and pointing to the corresponding changes stored in the Git repository. A Gerrit change is a Git commit object uploaded for review and associated with its comments and scores. It is stored in the project's Git repository but is not visible or reachable from the normal Git commit graph, even though it starts from a point on that graph.
  • Access Control Lists (ACLs): It contains the list of roles defined for the Gerrit project and the associated access permissions to the Git repository branches.
  • Prolog rules: It is the set of rules that govern the code review process for the project.
  • Additional metadata: All the extra settings such as description, merge strategy, contributor agreements, and accessory metadata needed in order to manage the project.

A change is made and reviewed through these stages in Gerrit:

  1. Making the change.
  2. Creating the review.
  3. Reviewing the change.
  4. Reworking the change.
  5. Verifying the change.
  6. Submitting the change.


How to setup Kubernetes Dashboard in EKS using NodePort?

Step 1: Deploy the Dashboard
# Deploy the Kubernetes dashboard to your cluster:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

# Deploy heapster to enable container cluster monitoring and performance analysis on your cluster:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml

# Deploy the influxdb backend for heapster to your cluster:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

# Create the heapster cluster role binding for the dashboard:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

Step 2: Create an eks-admin Service Account and Cluster Role Binding
# Create a file called eks-admin-service-account.yaml with the text below:

vi eks-admin-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system

$ kubectl apply -f eks-admin-service-account.yaml

vi eks-admin-cluster-role-binding.yaml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system

$ kubectl apply -f eks-admin-cluster-role-binding.yaml

Step 3: Retrieve an authentication token 
Retrieve an authentication token for the eks-admin service account. Copy the <authentication_token> value from the output. You use this token to connect to the dashboard.

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')

Step 4: Connect to the Dashboard via NodePort
Change the kubernetes-dashboard service type to NodePort, find the node on which the dashboard pod is running and the assigned port, then browse to https://<node-ip>:<node-port>/ and sign in with the token from Step 3.
$ kubectl get pods --all-namespaces
$ kubectl get pods --namespace=kube-system
$ kubectl get svc --all-namespaces

$ kubectl edit svc/kubernetes-dashboard --namespace=kube-system
or
$ kubectl -n kube-system edit service kubernetes-dashboard
In the editor, change the service type from "ClusterIP" to "NodePort" and save; leave everything else unchanged.

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-07-27T10:22:50Z
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "3288196"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 03f2f747-9187-11e8-9432-02b761c0deac
spec:
  clusterIP: 10.100.194.75
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30530
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

$ kubectl describe pods kubernetes-dashboard-7d5dcdb6d9-mt9b9 --namespace=kube-system 
#Find which node is running and get a Port of SVC

$ kubectl get svc --all-namespaces
$ kubectl get pods --all-namespaces
$ kubectl describe pods kubernetes-dashboard-7d5dcdb6d9-h9dcb --namespace=kube-system
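A quick way to pull out the assigned NodePort and the node addresses, using jsonpath (a sketch; match the node shown in the describe output above):

$ kubectl -n kube-system get svc kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
$ kubectl get nodes -o wide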

Step 5: Connect to the Dashboard via ClusterIP and Proxy
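Start a local proxy to the API server first (kubectl proxy listens on localhost:8001 by default), then open the URL below in a browser on the same machine:

$ kubectl proxy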
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

For the NodePort approach (Step 4), the URL looks like: https://10.16.39.28:30178/

 

IMPORTANT: The Kubernetes Dashboard should be served over HTTPS.


What is Annotations in Kubernetes?

There are two ways to attach metadata to Kubernetes objects:

  1. labels
  2. annotations

Kubernetes annotations are used to attach arbitrary non-identifying metadata to objects. Clients such as tools and libraries can retrieve this metadata.

In contrast to labels, annotations are not used to identify and select objects. The metadata in an annotation can be small or large, structured or unstructured, and can include characters not permitted by labels.

Annotations, like labels, are key/value maps:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: ""

FORMAT

"metadata": {</p>
<p class="p1"><span class="Apple-converted-space">  </span>"annotations": {</p>
<p class="p1"><span class="Apple-converted-space">    </span>"key1" : "value1",</p>
<p class="p1"><span class="Apple-converted-space">    </span>"key2" : "value2"</p>
<p class="p1"><span class="Apple-converted-space">  </span>}</p>
<p class="p1">}</p>

Here are some examples of information that could be recorded in annotations:

  1. Build, release, or image information like timestamps, release IDs, git branch, PR numbers, image hashes, and registry address.
  2. Pointers to logging, monitoring, analytics, or audit repositories.
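Annotations can be set in the manifest as above, or attached to an existing object with kubectl annotate. A small sketch (the pod name and keys are made up for illustration):

$ kubectl annotate pod my-pod build/release-id=2018.10 build/git-branch=master
$ kubectl get pod my-pod -o jsonpath='{.metadata.annotations}'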

More

https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/


What is EKS and How EKS can help you?

In one line: Amazon EKS is generally available, bringing fully managed Kubernetes to AWS.

Amazon announced Amazon EKS at its re:Invent 2017 conference in November. At the time this article was written, Amazon EKS was available only in the US East (N. Virginia) and US West (Oregon) Regions.

The cost of running EKS is $0.20 per hour for the EKS Control Plane, apart from EC2, EBS, and Load Balancing prices for resources that run in your account.

How does Amazon EKS work?

Amazon EKS works by provisioning (starting) and managing the Kubernetes control plane for you. At a high level, Kubernetes consists of two major components – a cluster of ‘worker nodes’ that run your containers and the control plane that manages when and where containers are started on your cluster and monitors their status.

Without Amazon EKS, you have to run both the Kubernetes control plane and the cluster of worker nodes yourself. With Amazon EKS, you provision your cluster of worker nodes using the provided Amazon Machine Image (AMI) and AWS CloudFormation script and AWS handles provisioning, scaling, and managing the Kubernetes control plane in a highly available and secure configuration. This removes a significant operational burden for running Kubernetes and allows you to focus on building your application instead of managing AWS infrastructure.
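A rough sketch of creating the control plane from the AWS CLI (the cluster name, role ARN, and subnet/security group IDs are placeholders; worker nodes are still launched separately from the provided CloudFormation template):

$ aws eks create-cluster --name demo-cluster \
    --role-arn arn:aws:iam::111122223333:role/eks-service-role \
    --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc
$ aws eks describe-cluster --name demo-cluster --query cluster.status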

Major Features of Amazon Elastic Container Service for Kubernetes (EKS)
Amazon Elastic Container Service for Kubernetes (EKS) is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane.

  1. Availability and Scalability of Nodes – Amazon EKS automatically manages the availability and scalability of the Kubernetes control plane nodes that are responsible for
    1. Starting and stopping containers,
    2. Scheduling containers on virtual machines,
    3. Storing cluster data, and other tasks.
  2. Health check of Nodes – Amazon EKS automatically detects and replaces unhealthy control plane nodes for each cluster.
  3. Amazon EKS Integration – Great Integration with AWS networking and security services, such as Application Load Balancers for load distribution, IAM for role based access control, and VPC for pod networking.
  4. Managed Kubernetes Control Plane – Amazon EKS provides a scalable and highly-available control plane that runs across multiple AWS availability zones.
  5. Kubernetes Masters in three Availability Zones – Amazon EKS runs the Kubernetes control plane across three Availability Zones in order to ensure high availability, and it automatically detects and replaces unhealthy masters.
  6. Amazon EKS with IAM Authentication – Amazon EKS integrates Kubernetes RBAC (the native role based access control system for Kubernetes) with IAM authentication through a collaboration with Heptio. You can assign RBAC roles directly to each IAM entity allowing you to granularly control access permissions to your Kubernetes masters.
  7. Amazon EKS with VPC Support
    Your EKS clusters run in an Amazon VPC, allowing you to use your own VPC security groups and network ACLs. No compute resources are shared with other customers. This provides you a high level of isolation and helps you use Amazon EKS to build highly secure and reliable applications.
  8. Container Interface – EKS uses the Amazon VPC CNI plugin so that Kubernetes pods receive IP addresses from the VPC; the Container Network Interface for Kubernetes uses Elastic Network Interfaces to provide secondary IP addresses for Kubernetes pods.
  9. Amazon EKS Logging
    Amazon EKS is integrated with AWS CloudTrail to provide visibility and audit history of your cluster and user activity. You can use CloudTrail to view API calls to the Amazon EKS API.
  10. Amazon EKS with EBS – Kubernetes PersistentVolumes (used for cluster storage) are implemented as Amazon Elastic Block Store (EBS) volumes.
  11. Amazon EKS with Route 53 – The External DNS project allows services in Kubernetes clusters to be accessed via Route 53 DNS records. This simplifies service discovery and supports load balancing.
  12. Amazon EKS Support – Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community.

Reference
EKS Getting Started Guide
EKS Publication
EKS FAQ
