How to compile and build Gerrit Plugins?

To build Gerrit Plugins from source, you need:

A Linux or macOS system (Windows is not supported at this time)

zip, unzip, wget

$ yum install zip -y
$ yum install unzip -y
$ yum install wget -y
$ yum install git -y

Python 2 or 3
This is installed by default on RHEL 7 and Ubuntu servers.

Node.js

curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
OR
curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
sudo yum -y install nodejs

Bazel

## RHEL/CentOS 7 64-Bit ##
$ wget https://copr.fedorainfracloud.org/coprs/vbatts/bazel/repo/epel-7/vbatts-bazel-epel-7.repo
$ cp vbatts-bazel-epel-7.repo /etc/yum.repos.d/
$ yum install -y bazel

To install Bazel on Ubuntu, follow:
https://docs.bazel.build/versions/master/install-ubuntu.html

Maven

$ cd /opt
$ wget http://www-us.apache.org/dist/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.zip
$ unzip apache-maven-3.5.4-bin.zip
$ mv apache-maven-3.5.4 maven
$ export PATH=$PATH:/opt/maven/bin

gcc

$ sudo yum install gcc-c++ make
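With the packages above in place, a quick sanity check can confirm the basic tools are on PATH before cloning Gerrit. This is a minimal sketch; the tool list mirrors the yum packages from this section, so extend it with bazel, node, and mvn as needed:

```shell
#!/bin/sh
# Verify that the basic prerequisites installed above are on PATH.
# The tool list is an assumption based on this section; extend as needed.
for tool in git zip unzip wget; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```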

The Bazel build is tree-driven, which means a plugin can only be built from within the Gerrit source tree. Clone or link the plugin into the gerrit/plugins directory:

# First become a non-root user

A JDK for Java 8

$ cd
$ wget -c --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz
$ tar -xvf jdk-8u181-linux-x64.tar.gz
$ export JAVA_HOME=/home/ec2-user/jdk1.8.0_181
$ java -version

Follow these steps to build gerrit.war:

$ git clone --recursive https://gerrit.googlesource.com/gerrit
$ cd gerrit 
$ bazel build release

Follow these steps for plugins such as its-jira:

$ cd plugins
$ git clone https://gerrit.googlesource.com/plugins/its-jira
$ git clone https://gerrit.googlesource.com/plugins/its-base
$ bazel build plugins/its-jira

The output can be normally found in the following directory:

bazel-genfiles/plugins/its-jira/its-jira.jar
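Once the jar exists, installing it is typically just a copy into the Gerrit site's plugins directory, which Gerrit scans for plugins. A sketch, where GERRIT_SITE is an assumed placeholder for your real site path:

```shell
#!/bin/sh
# Copy the built plugin into a Gerrit site's plugins directory.
# GERRIT_SITE is an assumed placeholder; point it at your actual site path.
GERRIT_SITE=${GERRIT_SITE:-/opt/gerrit}
JAR=bazel-genfiles/plugins/its-jira/its-jira.jar

if [ -f "$JAR" ]; then
  cp "$JAR" "$GERRIT_SITE/plugins/" && echo "installed $(basename "$JAR")"
else
  echo "build output not found: $JAR"
fi
```

The ssh-based plugin install command referenced at the end of this section is a remote alternative to copying the jar by hand.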

# Some plugins describe their build process in a src/main/resources/Documentation/build.md file. It may be worth checking.

# Some plugins can be built using Maven as well.

Reference

  • https://gerrit-review.googlesource.com/Documentation/dev-bazel.html
  • https://gerrit.googlesource.com/gerrit/
  • https://gerrit-review.googlesource.com/Documentation/cmd-plugin-install.html
  • https://gerrit-review.googlesource.com/Documentation/dev-build-plugins.html

Types of Build

Following are the types of build:

  • Distributed processing builds
  • Parallel processing builds
  • Multi-platform builds
  • Multi-language builds
  • Dedicated builds


Build Scala Project using sbt and Jenkins

Agenda

  • Scala – A Scalable language
  • Scala Download
  • Scala Software Requirement
  • Scala IDEs
  • Scala Install Configuration
  • Scala First Program
  • Compile and Run
  • Building Scala Projects using Jenkins
  • sbt download and configure

Build and Release Training online

Upcoming Training Dates | Training Agenda | Training Calendar | FAQ | Why scmGalaxy Online Training

Training Duration – 30 Days (90 mins each day)

Mode – Online (Webex | Skype | Gotomeeting)

Email –

Mode of Payment – Online Bank Transfer (Send us an email to info@scmgalaxy.com with the confirmed payment receipt for acknowledgement)

Registration is on a first-come basis, and only confirmed registrations will be considered.

Course Materials – Shared at the end of each day's session.

Lab – 70% of the training consists of lab work.

Demo Class – The scmGalaxy team does not believe in the demo-class concept, as it is very difficult to evaluate any training or trainer in a single 90-minute session. Still, if you want to experience our training before enrollment, we may add you to an ongoing live class on special request. If you want to know more about us – Click here

Refund – If you are reaching out to us, you have a genuine need for this training; but if you feel the training does not meet your expectations, you may cancel within the first three days of class and a 100% refund will be processed.

What if you miss the scheduled class? – If you miss the scheduled class, you can join any other ongoing batch at any time in the future, free of cost.

scmGalaxy Advantage – If you enroll for our courses, you can attend the training for that specific course any number of times, free of cost.

Weekdays Class Timing(Mon-Fri)

CEST                  IST                    PST
8:00 AM – 9:30 AM     11:30 AM – 1:00 PM     11:00 PM – 12:30 AM
6:30 PM – 8:00 PM     10:00 PM – 11:30 PM    9:30 AM – 11:00 AM
10:00 PM – 11:30 PM   1:30 AM – 3:00 AM      1:00 PM – 2:30 PM

Weekend Class Timing (Sat – Sun)

CEST                  IST                    PST
11:00 AM – 2:00 PM    2:30 PM – 5:30 PM      2:00 AM – 5:00 AM

Tools Covered as Part of this Training – Jenkins, Git, SVN, Ant, Maven, MSBuild, Chef Fundamental, RPM, Shell Scripting and Linux

Course Outline :

Concept / Process / Principals / Overview

  • Software Configuration Management overview
  • Elements of Software Configuration Management
  • Introduction of Version management / Source Code Management
  • Overview of Build management
  • Overview of Packaging management
  • Overview of Release and Deployment management

Source Code Management Tools

Build Management Tools

Application Packaging Management Tools

  • RPM – A Linux-based application packaging tool

Deployment Management / Configuration management – Fundamental only

Application server – Fundamental only

  • JBoss – An open source application server

Operating System

  • Windows – A Microsoft operating system
  • Linux – An open source operating system

Scripting

CI/CD Concept and Implementation


List of build automation software

Make-based tools

  •     distcc
  •     GNU make, a widely used make implementation with a large set of extensions
  •     make, a classic Unix build tool
  •     mk, developed originally for Version 10 Unix and Plan 9, and ported to Unix as part of plan9port
  •     MPW Make, developed for Mac OS Classic and similar to but not compatible with Unix make; OS X comes with both GNU make and BSD make; available as part of Macintosh Programmer’s Workshop as a free, unsupported download from Apple
  •     nmake
  •     PVCS-make, basically follows the concept of make but with a noticeable set of unique syntax features[1]
  •     Rake, a Ruby-based build tool
  •     ElectricMake, a replacement for make and gmake that implements build parallelization with ElectricAccelerator. Produced by Electric Cloud Inc.

Non-Make-based tools

  •     Apache Ant, popular for Java platform development and uses an XML file format
  •     Apache Buildr, open-source build system, Rake-based, gives full power of scripting in Ruby with integral support for most abilities wanted in a build system
  •     Apache Maven, a Java platform tool for project management and automated software build
  •     A-A-P, a Python based build tool
  •     Cabal, common architecture for building applications and libraries in the programming language Haskell
  •     Flowtracer
  •     Gradle, an open-source build and automation system with a Groovy Rake domain specific language (DSL), combining the advantages of Ant and Apache Maven plus providing many innovative features like a reliable incremental build
  •     Leiningen, a tool providing commonly performed tasks in Clojure projects, including build automation
  •     MSBuild, the Microsoft build engine
  •     NAnt, a tool similar to Ant for the .NET Framework
  •     Perforce Jam, a generally enhanced, ground-up tool which is similar to Make
  •     Psake, domain-specific language and build automation tool written in PowerShell
  •     sbt, a build tool built on a Scala-based DSL
  •     SCons, Python-based, with integrated functionality similar to autoconf/automake
  •     Shake, Haskell based, embedded DSL
  •     Tup, Lua based, make-like DSL with a pure focus on speed and scalability
  •     Tweaker, allowing task definitions to be written in any languages (or intermixed languages) while providing a consistent interface for them all
  •     Visual Build, a graphical user interface software for software builds
  •     Waf is a Python-based tool for configuring, compiling and installing applications. It is a replacement for other tools such as Autotools, Scons, CMake or Ant

Build script generation tools

  •     automake
  •     CMake, a cross-platform tool that generates files for the native build environment, such as makefiles for Unix or Workspace files for Visual Studio
  •     GNU Build Tools (aka autotools), a collection of tools for portable builds. These in particular include Autoconf and Automake, cross-platform tools that together generate appropriate localized makefiles.
  •     Generate Your Projects (GYP) – Created for Chromium; it is another tool that generates files for the native build environment
  •     imake
  •     Premake, a Lua based tool for making makefiles, Visual Studio files, Xcode projects, and more
  •     qmake

Continuous integration tools

  •     AnthillPro, build automation with pipeline support for deployment automation and testing. Cross-platform, cross-language
  •     Bamboo, continuous integration software
  •     Automated BuildStudio, a system for automating and managing software build, test and deploy processes, with build scheduling and continuous integration support
  •     Apache Continuum
  •     BuildBot, a Python-based software development continuous integration tool which automates the compile/test cycle
  •     BuildIT, a free graphical build or task tool for Windows with an emphasis on simplicity and ease of use
  •     Buildout, a Python-based build system for creating, assembling and deploying applications from multiple parts
  •     CABIE Continuous Automated Build and Integration Environment, open source, written in Perl
  •     Cascade, a continuous integration tool that builds and tests software components after each change is committed to the repository. Also provides a “checkpointing” facility by which changes can be built and tested before they are committed
  •     CruiseControl, for Java and .NET
  •     FinalBuilder, for Windows software developers. FinalBuilder provides a graphical IDE to create and run build projects
  •     Hudson, an extensible continuous integration engine
  •     Jenkins, an extensible continuous integration engine, forked from Hudson
  •     Team Foundation Server, an extensible continuous integration engine from Microsoft
  •     TeamCity

Configuration management tools

  •     Salt (Python-based)
  •     Ansible
  •     Puppet (Ruby-based)

10 Key Suggestions To Build The Document Management System With Your Employees

Employees never feel at ease under a boss who doesn’t trust them or whom they don’t trust. In the absence of mutual trust, productivity falls as employees get into politics, covering their backs and other inefficient activity. Distrust also erodes confidence, which leads to a deterioration in customer satisfaction as the focus shifts from business needs to internal wrangling.

So, let’s look at some key qualities a document management system must possess to develop trust.

Document Management System must communicate well to build strong relationships with their people. In difficult times, employees might think no news is bad news, so the boss must keep in close touch. Lack of communication reduces trust; being open with information creates it. A manager must develop an ability to trust others and create an environment of trust throughout the workplace. Really, it is better to assume the trustworthiness of employees to start with, rather than waiting for them to earn it. Team members find it much easier to trust their manager if they feel trusted themselves.

Being open and honest is a key ingredient for generating a well-organized Document Management System. When you are open about your vision, actions and objectives, you will usually generate strong support. Both kinds of news should be openly shared, reducing rumor and internal politics. Admitting mistakes rather than trying to cover them up shows any manager to be a normal human being, just like everyone else!

Managers should establish a moral value system for the workplace. Teams that share a common ethic are healthier, more resourceful, adaptable and productive owing to the common root of their value systems.

By making actions visible and delivering the commitments, managers become trusted. Failing on promises is insincere and causes tensions. A manager needs to deliver actions visibly, to ensure everyone knows that they can be depended upon. In the process of building trust, being consistent and predictable is very important.

Employees who you manage using such a document storage system must be able to confide sensitive information in you, express concerns and share issues. People need to know that you can keep this confidential when they need you to. Sometimes these can be personal matters, and in such cases this becomes even more significant. Watching your language is crucial. By avoiding the “us” and “them” figures of speech and instead using “we” wherever possible, your team will bond better with you. Your verbal communication should be clear and simple, because everyone interprets what is said differently, so you need to speak clearly for everyone to understand. Having informal social interactions with the staff enhances the trust-building process. In general, social interactions are a big opportunity for success for any good manager.

Making Document Management Systems work together efficiently requires an abundance of mutual trust. By consistently thinking about and working on earning trust, any manager will reap long-lasting positive benefits.


Basic RPM Tutorials

Basic RPM Tutorials

Introduction:

RPM is the RPM Package Manager. It is an open packaging system available for anyone to use. It allows users to take source code for new software and package it into source and binary form such that binaries can be easily installed and tracked and source can be rebuilt easily. It also maintains a database of all packages and their files that can be used for verifying packages and querying for information about files and/or packages.
Red Hat, Inc. encourages other distribution vendors to take the time to look at RPM and use it for their own distributions. RPM is quite flexible and easy to use, though it provides the base for a very extensive system.

RPM Basic usage command
In its simplest form, RPM can be used to install packages:
rpm -i foobar-1.0-1.i386.rpm
The next simplest command is to uninstall a package:

rpm -e foobar

While these are simple commands, rpm can be used in a multitude of ways. To see which options are available in your version of RPM, type:

rpm --help
You can find more details on what those options do in the RPM man page, found by typing:
man rpm

Let’s say you delete some files by accident, but you aren’t sure what you deleted. If you want to verify your entire system and see what might be missing, you would do:

rpm -Va

Let’s say you run across a file that you don’t recognize. To find out which package owns it, you would do:

rpm -qf /usr/X11R6/bin/xjewel

Now you want to see what files the koules RPM installs. You would do:

rpm -qpi koules-1.2-2.i386.rpm

Building RPMs

The basic procedure to build an RPM is as follows:

  • Get the source code you are building the RPM for to build on your system.
  • Make a patch of any changes you had to make to the sources to get them to build properly.
  • Make a spec file for the package.
  • Make sure everything is in its proper place.
  • Build the package using RPM.

The Spec File

Here is a small spec file (eject-2.0.2-1.spec):

Summary: A program that ejects removable media using software control.
Name: eject
Version: 2.0.2
Release: 3
Copyright: GPL
Group: System Environment/Base
Source: http://metalab.unc.edu/pub/Linux/utils/disk-management/eject-2.0.2.tar.gz
Patch: eject-2.0.2-buildroot.patch
BuildRoot: /var/tmp/%{name}-buildroot
%description
The eject program allows the user to eject removable media
(typically CD-ROMs, floppy disks or Iomega Jaz or Zip disks)
using software control. Eject can also control some multi-
disk CD changers and even some devices' auto-eject features.
Install eject if you'd like to eject removable media using
software control.
%prep
%setup -q
%patch -p1 -b .buildroot
%build
make RPM_OPT_FLAGS="$RPM_OPT_FLAGS"

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/bin
mkdir -p $RPM_BUILD_ROOT/usr/man/man1

install -s -m 755 eject $RPM_BUILD_ROOT/usr/bin/eject
install -m 644 eject.1 $RPM_BUILD_ROOT/usr/man/man1/eject.1

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root)
%doc README TODO COPYING ChangeLog

/usr/bin/eject
/usr/man/man1/eject.1

%changelog
* Sun Mar 21 1999 Cristian Gafton <gafton@redhat.com>
- auto rebuild in the new build environment (release 3)

* Wed Feb 24 1999 Preston Brown <pbrown@redhat.com>
- Injected new description and group.

[ Some changelog entries trimmed for brevity.  -Editor. ]
 

The Header
The header has some standard fields in it that you need to fill in. There are a few caveats as well. The fields must be filled in as follows:

  • Summary: This is a one line description of the package.
  • Name: This must be the name string from the rpm filename you plan to use.
  • Version: This must be the version string from the rpm filename you plan to use.
  • Release: This is the release number for a package of the same version (ie. if we make a package and find it to be slightly broken and need to make it again, the next package would be release number 2).
  • Copyright: This line tells how a package is copyrighted. You should use something like GPL, BSD, MIT, public domain, distributable, or commercial.
  • Group: This is a group that the package belongs to in a higher level package tool or the Red Hat installer.
  • Source: This line points at the HOME location of the pristine source file. It is used if you ever want to get the source again or check for newer versions. Caveat: The filename in this line MUST match the filename you have on your own system (ie. don’t download the source file and change its name). You can also specify more than one source file using lines like:
Source0: blah-0.tar.gz
Source1: blah-1.tar.gz
Source2: fooblah.tar.gz

These files would go in the SOURCES directory. (The directory structure is discussed in a later section, “The Source Directory Tree”.)
  • Patch: This is the place you can find the patch if you need to download it again. Caveat: The filename here must match the one you use when you make YOUR patch. You may also want to note that you can have multiple patch files much as you can have multiple sources. You would have something like:

Patch0: blah-0.patch
Patch1: blah-1.patch
Patch2: fooblah.patch

These files would go in the SOURCES directory.
  • Group: This line is used to tell high level installation programs (such as Red Hat’s gnorpm) where to place this particular program in its hierarchical structure. You can find the latest description in /usr/doc/rpm*/GROUPS.
  • BuildRoot: This line allows you to specify a directory as the “root” for building and installing the new package. You can use this to help test your package before having it installed on your machine.
  • %description: It’s not really a header item, but should be described with the rest of the header. You need one description tag per package and/or subpackage. This is a multi-line field that should be used to give a comprehensive description of the package.

Prep

This is the second section in the spec file. It is used to get the sources ready to build. Here you need to do anything necessary to get the sources patched and setup like they need to be setup to do a make.
One thing to note: Each of these sections is really just a place to execute shell scripts. You could simply make an sh script and put it after the %prep tag to unpack and patch your sources. We have made macros to aid in this, however.
The first of these macros is the %setup macro. In its simplest form (no command line options), it simply unpacks the sources and cd‘s into the source directory. It also takes the following options:

  • -n name will set the name of the build directory to the listed name. The default is $NAME-$VERSION. Other possibilities include $NAME, ${NAME}${VERSION}, or whatever the main tar file uses. (Please note that these “$” variables are not real variables available within the spec file. They are really just used here in place of a sample name. You need to use the real name and version in your package, not a variable.)
  • -c will create and cd to the named directory before doing the untar.
  • -b # will untar Source# before cd‘ing into the directory (and this makes no sense with -c so don’t do it). This is only useful with multiple source files.
  • -a # will untar Source# after cd’ing into the directory.
  • -T This option overrides the default action of untarring the Source and requires a -b 0 or -a 0 to get the main source file untarred. You need this when there are secondary sources.
  • -D Do not delete the directory before unpacking. This is only useful where you have more than one setup macro. It should only be used in setup macros after the first one (but never in the first one).

The next of the available macros is the %patch macro. This macro helps automate the process of applying patches to the sources. It takes several options, listed below:

  • # will apply Patch# as the patch file.
  • -p # specifies the number of directories to strip for the patch(1) command.
  • -P The default action is to apply Patch (or Patch0). This flag inhibits the default action and requires a 0 to get the main patch applied. This option is useful in a second (or later) %patch macro that requires a different number than the first macro.
  • You can also do %patch# instead of doing the real command: %patch # -P
  • -b extension will save originals as filename.extension before patching.

That should be all the macros you need. After you have those right, you can also do any other setup you need to do via sh type scripting. Anything you include up until the %build macro (discussed in the next section) is executed via sh. Look at the example above for the types of things you might want to do here.

Build

There aren’t really any macros for this section. You should just put any commands here that you would need to use to build the software once you had untarred the source, patched it, and cd’ed into the directory. This is just another set of commands passed to sh, so any legal sh commands can go here (including comments).
The variable RPM_OPT_FLAGS is set using values in /usr/lib/rpm/rpmrc. Look there to make sure you are using values appropriate for your system (in most cases you are). Or simply don’t use this variable in your spec file. It is optional.

Install

There aren’t really any macros here, either. You basically just want to put whatever commands here that are necessary to install. If you have make install available to you in the package you are building, put that here. If not, you can either patch the makefile for a make install and just do a make install here, or you can hand install them here with sh commands. You can consider your current directory to be the toplevel of the source directory.
The variable RPM_BUILD_ROOT is available to tell you the path set as the Buildroot: in the header. Using build roots are optional but are highly recommended because they keep you from cluttering your system with software that isn’t in your RPM database (building an RPM doesn’t touch your database…you must go install the binary RPM you just built to do that).

Optional pre and post Install/Uninstall Scripts

You can put scripts in that get run before and after the installation and uninstallation of binary packages. A main reason for this is to do things like run ldconfig after installing or removing packages that contain shared libraries. The macros for each of the scripts is as follows:

  • %pre is the macro to do pre-install scripts.
  • %post is the macro to do post-install scripts.
  • %preun is the macro to do pre-uninstall scripts.
  • %postun is the macro to do post-uninstall scripts.

The contents of these sections should just be any sh style script, though you do not need the #!/bin/sh.
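For example, a package that ships shared libraries commonly runs ldconfig in these scriptlets. This is a generic sketch, not part of the eject example above:

```
%post
/sbin/ldconfig

%postun
/sbin/ldconfig
```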

Files

This is the section where you must list the files for the binary package. RPM has no way to know what binaries get installed as a result of make install. There is NO way to do this. Some have suggested doing a find before and after the package install. With a multiuser system, this is unacceptable as other files may be created during a package building process that have nothing to do with the package itself.
There are some macros available to do some special things as well. They are listed and described here:

  • %doc is used to mark documentation in the source package that you want installed in a binary install. The documents will be installed in /usr/doc/$NAME-$VERSION-$RELEASE. You can list multiple documents on the command line with this macro, or you can list them all separately using a macro for each of them.
  • %config is used to mark configuration files in a package. This includes files like sendmail.cf, passwd, etc. If you later uninstall a package containing config files, any unchanged files will be removed and any changed files will get moved to their old name with a .rpmsave appended to the filename. You can list multiple files with this macro as well.
  • %dir marks a single directory in a file list to be included as being owned by a package. By default, if you list a directory name WITHOUT a %dir macro, EVERYTHING in that directory is included in the file list and later installed as part of that package.
  • %defattr allows you to set default attributes for files listed after the defattr declaration. The attributes are listed in the form (mode, owner, group) where the mode is the octal number representing the bit pattern for the new permissions (like chmod would use), owner is the username of the owner, and group is the group you would like assigned. You may leave any field to the installed default by simply placing a - in its place, as was done in the mode field for the example package.
  • %files -f <filename> will allow you to list your files in some arbitrary file within the build directory of the sources. This is nice in cases where you have a package that can build its own filelist. You then just include that filelist here and you don’t have to specifically list the files.

The biggest caveat in the file list is listing directories. If you list /usr/bin by accident, your binary package will contain every file in /usr/bin on your system.

Building It

The Source Directory Tree

The first thing you need is a properly configured build tree. This is configurable using the /etc/rpmrc file. Most people will just use /usr/src.
You may need to create the following directories to make a build tree:

  • BUILD is the directory where all building occurs by RPM. You don’t have to do your test building anywhere in particular, but this is where RPM will do its building.
  • SOURCES is the directory where you should put your original source tar files and your patches. This is where RPM will look by default.
  • SPECS is the directory where all spec files should go.
  • RPMS is where RPM will put all binary RPMs when built.
  • SRPMS is where all source RPMs will be put.

Building the Package with RPM

Once you have a spec file, you are ready to try to build your package. The most useful way to do it is with a command like the following:

rpm -ba foobar-1.0.spec

There are other options useful with the -b switch as well:

  • p means just run the prep section of the spec file.
  • l is a list check that does some checks on %files.
  • c does a prep and compile. This is useful when you are unsure of whether your source will build at all. It seems useless because you might want to just keep playing with the source itself until it builds and then start using RPM, but once you become accustomed to using RPM you will find instances when you will use it.
  • i does a prep, compile, and install.
  • b does a prep, compile, and install, and builds a binary package only.
  • a builds it all (both source and binary packages).

There are several modifiers to the -b switch. They are as follows:

  • --short-circuit will skip straight to a specified stage (can only be used with c and i).
  • --clean removes the build tree when done.
  • --keep-temps will keep all the temp files and scripts that were made in /tmp. You can actually see what files were created in /tmp using the -v option.
  • --test does not execute any real stages, but does keep-temps.

Reference:
http://www.ibiblio.org/pub/linux/docs/HOWTO/other-formats/html_single/RPM-HOWTO.html


Steps for a complete clean build

Following are the steps for a complete clean build:

1. Build project (compilation)—In the build phase, the build system compiles operating system (OS) component source files and produces libraries. The basic unit of componentization in Windows CE is the library—components are not conditionally compiled. Because of this, components can be mixed and matched without worrying about changes in their behavior.

2. Link project—During the link phase, the build system attempts to build all target modules. Modules are drivers and executables produced from Windows CE components. In CE 4.0 and later, you can select modules via sysgen environment variables. For example, the “common” project’s modules are listed in CE_MODULES, the DirectX project’s modules are listed in DIRECTX_MODULES, Internet Explorer’s modules are listed in IE_MODULES, and so on.

Microsoft introduced the separation of the build phase and the link phase in Windows CE .NET. Because the operating system was getting more and more complex, linking drivers and other components during the build phase could possibly cause hard-to-diagnose crashes at runtime because coredll entry points that were present during the build phase (which occurs prior to componentization) might not be present in an OEM’s final platform.

3. Copy/filter project (headers and libraries)—The Copy/Filter phase of system generation is responsible for moving parts of the operating system to the target project’s cesysgen directory. Note: only the components of the OS that the OEM has selected are moved. In addition, header files and various configuration files such as common.bib and common.reg are “filtered” to remove the parts that are unrelated to the OEM’s selected components. The copy/filter is performed at the same time as linking.

4. Post-process project (miscellaneous post-sysgen cleanup)—The “postproc” sysgen target provides the build system with a mechanism to do some work after most of the system has been generated. Although the post-process phase is important for the small number of OEMs who use it, most developers don’t do much with it.

5. Platform sysgen—If an OEM wants to write his platform in such a way that it can be used with any selection of OS components, he immediately runs into a problem. Some of the drivers, Control Panel applets, or applications in the platform directory might depend on unselected components. When these source files are built, there are compilation or linker errors because header files or coredll entry points are missing.

The platform sysgen step helps address this problem by checking for a platform sysgen makefile.

6. Build platform—This phase consists of running a clean build on all projects and building only the projects that sysgen settings specify.

7. Create release directory—After all the OS components and the platform have been compiled and linked, you need to assemble all the appropriate binaries and configuration files into one place so that you can combine them into a downloadable image. You can use a batch file to perform this step.
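The release-directory step is usually just scripted file copying. A hedged sketch: the variable name _FLATRELEASEDIR mirrors the Platform Builder convention, but the directory layout and file names below are made up for illustration:

```shell
# Sketch: gather built binaries and config files into one release directory.
set -e
workdir=$(mktemp -d)
cd "$workdir"

_FLATRELEASEDIR=release
mkdir -p bin "$_FLATRELEASEDIR"
# Stand-ins for the real build outputs:
touch bin/nk.exe bin/coredll.dll bin/platform.reg

cp bin/*.exe bin/*.dll bin/*.reg "$_FLATRELEASEDIR"/
ls "$_FLATRELEASEDIR"
```

In a real batch file the copies would pull from the per-project build output directories rather than a single bin directory.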

8. Create downloadable image—After you populate the release directory with the appropriate binaries, the next step is to create a binary file suitable for flashing or downloading to your device’s RAM. Use the makeimg command for this step.
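makeimg itself only runs inside the Windows CE build environment, so the block below just sketches the sequence as comments; nk.bin is the standard name of the resulting image:

```shell
# In a real CE build window, this step is:
#   cd %_FLATRELEASEDIR%
#   makeimg              (combines the release directory into nk.bin)
# Outside that environment we can only state the intent:
echo "makeimg -> nk.bin (downloadable image)"
```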

Hardware for The Build Lab

The build lab should include some high-end hardware for building the applications. Because the entire team depends on the results of a build, the high-end computers ensure that the build is completed as quickly as possible. Furthermore, you can use high-speed network equipment to push bits around from source control to build machines to release servers.

At a minimum, the build lab should have four machines:

  • Server that contains the Source Code Control program— This is your product. Do you really want this server residing someplace where you have little control over it?
  • Debug build machine for the mainline builds— If you don’t separate your debug and release machines, you will accidentally ship debug binaries, which is not a good thing.
  • Release build machine for the mainline builds— This is a “golden goose” that turns out the “gold eggs” of your company or group. Treasure this machine like a princess, and guard it like all the king’s fortunes.
  • Internal release share server— This is one more piece of hardware that stores the “bread and butter” of the group or company. Don’t give up control of this hardware to anyone unless your IT department reports through your development group.

Hardware Requirements

Each machine in the preceding list should meet the following requirements:

  • Number of processors— This depends on the build tool you use. One is usually sufficient, because few build tools really take advantage of multiple processors.
  • Processor speed— The lab budget dictates this, but the faster the processor, the better it is.
  • Amount of installed RAM— Max out the machine. RAM is relatively cheap these days, especially when you consider the performance increase you get. Increasing the RAM is usually the first upgrade done when trying to improve the performance of any computer.
  • Number of hard drives— A minimum of two drives (or partitions) is preferred:
    • Drive 1 (C:) is for the operating system and installed applications.
    • Drive 2 (D:) is for building binaries, release shares, or the source database; the minimum space required is roughly ten times the space needed to build your application.
    • The split partitions are good because if you ever need to format or blow away a drive due to corruption, only part of the project will be affected. The recovery is much faster and easier.
  • Hard drive type— This is most likely SCSI, but it could be IDE.
  • Number of power supplies— If you purchase server class hardware (pizza boxes) that belong in racks, you need to consider how many power supplies to order.
  • Motherboard BIOS version— This does make a difference. Make sure you note what is being used and standardize on it.

XML Is the Here, the Now, and the Future

XML is short for Extensible Markup Language, which is a specification developed by the World Wide Web Consortium (W3C). XML is a pared-down version of SGML, designed especially for Web documents. It allows designers to create their own customized tags, enabling the definition, transmission, validation, and interpretation of data between applications and between organizations.

Following are three good reasons why you should master XML:

  • XML is seen as a universal, open, readable representation for software integration and data exchange. IBM, Microsoft, Oracle, and Sun have all built XML support into their database products and authoring tools.
  • .NET and J2EE (Java 2 Platform, Enterprise Edition) depend heavily on XML.
    • All ASP.NET configuration files are based on XML.
    • XML provides serialization and deserialization, so objects can be sent across a network in a format both ends understand.
    • XML offers SOAP Web Services communication.
    • XML offers temporary data storage.
  • MSBuild and the future project files of Visual Studio will be in XML format. Ant is also XML-based.
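As a concrete taste of the XML-based build files mentioned above, here is a minimal MSBuild project file; the property and target names are chosen for illustration:

```xml
<!-- Minimal MSBuild project file; property/target names are illustrative. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration>Release</Configuration>
  </PropertyGroup>
  <Target Name="Build">
    <Message Text="Building in $(Configuration) configuration" />
  </Target>
</Project>
```

Everything a build needs (properties, targets, tasks) is expressed as tags and attributes, which is exactly why tools across platforms can read and generate these files.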

Thus, if you want to learn one language that will cover many tools and technologies no matter what platform you are working on, that language is XML. The main difference in all these build tools is not so much the feature set but the syntax. I get tired of learning all the quirks of new languages, but I’m happy to learn XML because it’s here to stay and it’s fairly easy to learn.
