What is “enable signed push support” in Gerrit?

This option defaults to false.

When a client pushes with git push --signed, this setting ensures that the push certificate is valid and signed with a public key stored in the refs/meta/gpg-keys branch of the All-Users repository.

If true, server-side signed push validation is enabled.

It is configured in gerrit.config via receive.enableSignedPush.
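
A minimal sketch of what this looks like in gerrit.config (the setting name is as documented above; the file lives under your Gerrit site's etc/ directory):

    [receive]
      enableSignedPush = true

On the client side, a push carrying a push certificate is then issued with git push --signed, for example (the refspec below is only an illustration):

    git push --signed origin HEAD:refs/for/master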


How to build when a change is pushed to Bitbucket?


 

The Bitbucket plugin is designed to offer integration between Bitbucket and Jenkins.

 

It exposes a single URI endpoint that you can add as a webhook within each Bitbucket project you wish to integrate with. This endpoint receives a full data payload from Bitbucket on every push (see the Bitbucket documentation), triggering any compatible jobs to build based on the changed repository and branch.

 

Step 1 – Install the “Bitbucket Plugin” on your Jenkins server.

 

Step 2 – Add a normal POST hook to your Bitbucket repository (Settings -> Hooks) and use the following URL:

 

https://YOUR.JENKINS.SERVER:PORT/bitbucket-hook/
If you have set up authentication on Jenkins, then the URL must be of the form:

 

https://USERNAME:PASSWORD@YOUR.JENKINS.SERVER:PORT/bitbucket-hook/
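
As a quick sanity check (an illustrative command, not part of the plugin documentation; it assumes the endpoint is reachable from your machine), you can confirm that Jenkins is serving the hook endpoint with curl; a real push from Bitbucket will additionally include the JSON payload that actually triggers the matching jobs:

    curl -i -X POST https://YOUR.JENKINS.SERVER:PORT/bitbucket-hook/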

 

Step 3 – Configure your Jenkins project as follows:

 

Step 4 – Under Build Triggers, enable “Build when a change is pushed to Bitbucket”.

 

Step 5 – Under Source Code Management, select Git, enter your credentials, and define the branches to build (like **feature/*).

 

Note 1 – Make sure to include the slash (‘/’) at the end of the URL or the hook won’t work.

 

Note 2 – Please also read the Bitbucket Plugin info page: https://wiki.jenkins.io/display/JENKINS/BitBucket+Plugin

 


How to build when a change is pushed to GitHub in Jenkins?

The GitHub plugin for Jenkins is the most basic plugin for integrating Jenkins with GitHub projects. If you are a GitHub user, this plugin enables you to:
  • Schedule your build
  • Pull your code and data files from your GitHub repository to your Jenkins machine
  • Automatically trigger a build on the Jenkins server after each commit to your Git repository

This saves you time and lets you incorporate your project into the Continuous Integration (CI) process.

How to Start Working with the GitHub Plugin for Jenkins
Install the GitHub Jenkins plugin
Go to “Manage Jenkins” -> “Manage Plugins” -> “Available” tab -> search for “GitHub plugin” and install it.
Configure the plugin with GitHub accounts and keys
Go to “Manage Jenkins” -> “Configure System” -> locate the “GitHub” section and “Add GitHub Server”.

API URL – If your server is github.com, your “API URL” would be “https://api.github.com”. Otherwise, if you use GitHub Enterprise, specify its API endpoint here (e.g., https://ghe.acme.com/api/v3/).

Credentials – You can create a personal access token in your GitHub account settings.
The token should be registered with the required scopes; refer to https://github.com/settings/tokens/new.
Add the credentials (your GitHub token), click Apply, and then “Test Connection”.

Open your Jenkins Project

a) Check the GitHub project checkbox and set the Project URL to point to your GitHub Repository

b) Under Source Code Management, check Git and set the Repository URL to point to your GitHub Repository
c) Under Build Triggers, check the “Build when a change is pushed to GitHub” checkbox
Add the “Jenkins (GitHub plugin)” service and set a webhook to your Jenkins machine
a) From your GitHub repository, go to Settings and then to Integrations & Services
b) Click on Add Service and add ‘Jenkins (GitHub plugin)’

c) Set the Jenkins hook URL to the URL of your Jenkins machine, with /github-webhook/ appended
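
For example, if your Jenkins machine is reachable at https://YOUR.JENKINS.SERVER (a placeholder hostname), the resulting hook URL would be:

    https://YOUR.JENKINS.SERVER/github-webhook/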



Congratulations! Every time you push your changes to GitHub, GitHub will trigger your new Jenkins job.
Another Approach noted by suprakash
Actually, if you configure the webhook settings from the Jenkins -> GitHub plugin configuration (mentioned above), you will still see webhooks get created in GitHub. So the two approaches above are basically doing the same thing.
I personally prefer to create the webhook from GitHub, because that way you don’t have to share or store GitHub user info in Jenkins.
Steps:
  1. Log in to GitHub (with admin rights)
  2. Go to the repository you want to hook up with Jenkins
  3. Click on the Settings tab -> Webhooks & services
  4. Click on Add Webhook
  5. Enter the payload URL, e.g. http://YOUR.JENKINS.SERVER:8080/github-webhook/
  6. Select content type application/json
  7. You are done

Now, when you make changes and commit, you will see the Jenkins build get triggered automatically. Don’t forget to configure the Jenkins job to build when a change is pushed to GitHub.


How to Run/Deploy Java EE applications on Amazon EC2?


Running Java EE applications on Amazon EC2: deploying to 20 machines with no money down

Computer hardware has traditionally been a scarce, expensive resource. In the early days of computing developers had to share a single machine. Today each developer usually has their own machine but it’s rare for a developer to have more than one. This means that running performance tests often involves scavenging for machines.  Likewise, replicating even just part of a production environment is a major undertaking. With Amazon’s Elastic Compute Cloud (EC2), however, things are very different. A set of Linux servers is now just a web service call away. Depending on the type of the servers you simply pay 10-80 cents per server per hour for up to 20 servers! No more upfront costs or waiting for machines to be purchased and configured.

To make it easier for enterprise Java developers to use EC2, I have created EC2Deploy.  It’s a Groovy framework for deploying an enterprise Java application on a set of Amazon EC2 servers. EC2Deploy provides a simple, easy to use API for launching a set of EC2 instances; configuring MySQL, Apache and one or more Tomcat servers; and deploying one or more web applications. In addition, it can also run JMeter and collect performance metrics.

Here is an example script that launches some EC2 instances; configures MySQL with one slave, Tomcat and Apache; deploys a single web application on the Tomcat server; and runs a JMeter test with first one thread and then two.

class ClusterTest extends GroovyTestCase {
  void testSomething() {
    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")

    def ec2 = new EC2(awsProperties)

    def explodedWar = '…/projecttrack/webapp/target/ptrack'

    ClusterSpec clusterSpec =
       new ClusterSpec()
            .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
            .slaves(1)
            .tomcats(1)
            .webApp(explodedWar, "ptrack")
            .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

    SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

    cluster.start()

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx",
        [1, 2])

    cluster.stop()
  }
}

Let’s look at each of the pieces.

First, we need to configure the framework as follows:

    AWSProperties awsProperties = new
        AWSProperties("/…/aws.properties")
    def ec2 = new EC2(awsProperties)

The aws.properties file contains various properties, including the Amazon Web Services security credentials and the EC2 AMI (i.e. OS image) to launch. All servers use my EC2 appliance AMI that has Java, MySQL, Apache, Tomcat, JMeter and some other useful tools pre-installed.
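
As an illustration only (the actual property names are defined by EC2Deploy and may differ; every value below is a placeholder, not a real credential), an aws.properties file might look something like this:

    # Amazon Web Services security credentials (placeholders)
    accessKeyId=YOUR_ACCESS_KEY_ID
    secretAccessKey=YOUR_SECRET_ACCESS_KEY
    # The EC2 appliance AMI (OS image) to launch
    imageId=ami-xxxxxxxx
    # SSH key pair used to log in to the launched instances
    keyName=my-keypair
    privateKeyFile=/path/to/my-keypair.pem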

Next we need to configure the servers:

     ClusterSpec clusterSpec =
        new ClusterSpec()
             .schema("ptrack", ["ptrack": "ptrack"],
                    ["src/test/resources/testdml1.sql",
                     "src/test/resources/testdml2.sql"])
             .slaves(1)
             .tomcats(1)
             .webApp(explodedWar, "ptrack")
             .catalinaOptsBuilder({builder, databasePrivateDnsName ->
                 builder.arg("-Xmx500m")
                 builder.prop("com.sun.management.jmxremote")
                 builder.prop("com.sun.management.jmxremote.port", 8091)
                 builder.prop("com.sun.management.jmxremote.authenticate",
                                     false)
                 builder.prop("com.sun.management.jmxremote.ssl", false)
                 builder.prop("ptrack.application.environment", "ec2")
                 builder.prop("log4j.configuration",
                               "log4j-minimal.properties")
                 builder.prop("jdbc.db.server", databasePrivateDnsName)})

     SimpleCluster cluster = new SimpleCluster(ec2, clusterSpec)

This code first creates a ClusterSpec, which defines the configuration of the machines and the applications:

  • schema() – specifies the name of the database schema to create, the names of the users and their passwords, and the DML scripts to execute once the database has been created
  • slaves() – specifies how many MySQL slaves to create
  • tomcats() – specifies how many Tomcat servers to run
  • webApp() – configures a web application. This method takes two parameters: the path to the exploded WAR directory (conveniently created by Maven) and the context to deploy the web application under.
  • catalinaOptsBuilder() – supplies a closure that takes a builder and the DNS name of the MySQL server as arguments and returns the CATALINA_OPTS used to launch Tomcat. Its primary purpose is to configure the web application(s) to use the correct database server.

It then creates a cluster with that specification.

We then start the cluster:

    cluster.start()

At this point EC2Deploy will:

  1. Launch the EC2 instances running my appliance AMI.
  2. Initialize the MySQL master database
  3. Create the MySQL slave
  4. Create the database schema and the users
  5. Run any DML scripts (these are cached on S3 in a bucket called “tmp-dml” for the reasons described next)
  6. Upload the web applications to Amazon S3 (Simple Storage Service), where they are cached in order to avoid time-consuming uploads (over slow DSL connections, for example). EC2Deploy only uploads new and changed files, which means that the bulky third-party libraries are only uploaded once. Each web application is stored in an S3 bucket called -tmp-war. If this bucket does not exist you will see some warning messages and the bucket will be created.
  7. Deploy the web applications on each of the Tomcat servers
  8. Configure Apache to load balance across the Tomcat servers

Once the cluster is started we can run a JMeter load test:

    cluster.loadTest("…/projecttrack/functionalTests/jmeter/SimpleTest.jmx", [1, 2])

The first argument specifies the test to run and the second argument is a list of JMeter thread counts. In this example, EC2Deploy first runs the load test with one thread and then with two threads. For each test run, it generates a report describing CPU utilization for each machine, average response time, and throughput.

Finally, we stop the EC2 instances:

cluster.stop()

As you can see, EC2Deploy makes it pretty easy to deploy and test your enterprise Java application. I’ve used it to clone a production environment and run load tests. NOTE 1/28/08: The source code for EC2Deploy, along with a very cool Maven plugin, is now available!
