10 Key Suggestions To Build The Document Management System With Your Employees

Employees never feel at ease under a boss who doesn’t trust them or whom they don’t trust. In the absence of mutual trust, productivity falls as employees turn to politics, covering their backs, and other inefficient activity. A lack of trust also undermines confidence, which leads to a deterioration in customer satisfaction as attention shifts from business needs to internal wrangling.

So, let’s look at some key qualities a document management system must possess to develop trust.

Managers must communicate well to build strong relationships with their people. In difficult times, employees might think no news is bad news, so the boss must keep in close touch. Lack of communication reduces trust; being open with information creates it. A manager must develop an ability to trust others and create an environment of trust throughout the workplace. Really, it is better to assume the trustworthiness of employees to start with, rather than waiting for them to earn it. Team members find it much easier to trust their manager if they feel trusted themselves.

Being open and honest is a key ingredient in generating a well-organized Document Management System. When you are open about your vision, actions, and objectives, you will usually generate strong support. Both kinds of news, good and bad, should be openly shared, reducing rumor and internal politics. Admitting mistakes rather than trying to cover them up shows any manager to be a normal human being, just like everyone else!

Managers should establish a moral value system for the workplace. Teams that share a common ethic are healthier, more resourceful, adaptable, and productive owing to the common root of their value systems.

By making actions visible and delivering on commitments, managers become trusted. Failing to keep promises comes across as insincere and causes tension. A manager needs to deliver actions visibly, to ensure everyone knows that they can be depended upon. In the process of building trust, being consistent and predictable is very important.

Employees who you manage using such a document storage system must be able to confide sensitive information in you, express concerns, and share issues. People need to know that you can keep things confidential when they need you to. Sometimes these are personal matters, and in such cases confidentiality becomes even more significant. Watching your language is crucial. By avoiding “us” and “them” figures of speech and instead using “we” wherever possible, your team will bond better with you. Your verbal communication should be clear and simple, because everyone interprets what is said differently, so you need to speak clearly for everyone to understand. Having informal social interactions with the staff enhances the trust-building process. In general, social interactions are a big opportunity for success for any good manager.

Making Document Management Systems work together efficiently requires an abundance of mutual trust. By consistently thinking about and working on earning trust, any manager will reap long-lasting benefits.


Tools for Counting Lines of Code in Source Code

USC CodeCount and USC COCOMO – $0

CodeCount automates the collection of source code sizing information. The CodeCount toolset utilizes one of two possible source lines of code (SLOC) definitions, physical or logical. COCOMO (COnstructive COst MOdel) is a tool that allows one to estimate the cost, effort, and schedule associated with a prospective software development project.

Languages: Ada, Assembly, C, C++, COBOL, FORTRAN, Java, JOVIAL, Pascal, PL1
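The physical-SLOC definition used by these tools can be illustrated with a toy counter. This sketch counts lines that are neither blank nor pure “//” comments; real tools handle many more comment styles, but the core idea fits in one grep (the sample file content is an illustration, not from any tool above):

```shell
# Count physical SLOC: non-blank lines that are not pure '//' comments.
src=$(mktemp)
cat > "$src" <<'EOF'
// demo program
int main(void) {
    return 0;
}

EOF
# -v inverts the match, -c counts the surviving lines:
grep -cv -e '^[[:space:]]*$' -e '^[[:space:]]*//' "$src"
```

The same one-liner can be pointed at any source tree with `find ... -exec`, which is roughly what the simpler tools in this list automate.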

SLOCCount – $0

SLOCCount is a set of tools for counting physical Source Lines of Code (SLOC) in a large number of languages of a potentially large set of programs. SLOCCount can automatically identify and measure many programming languages.

Languages: Ada, Assembly, awk, Bourne shell and variants, C, C++, C shell, COBOL, C#, Expect, Fortran, Haskell, Java, lex/flex, LISP/Scheme, Makefile, Modula-3, Objective-C, Pascal, Perl, PHP, Python, Ruby, sed, SQL, TCL, and Yacc/Bison.

SourceMonitor – $0

SourceMonitor lets you see inside your software source code to find out how much code you have and to identify the relative complexity of your modules. For example, you can use SourceMonitor to identify the code that is most likely to contain defects and thus warrants formal review. It collects metrics in a fast, single pass through source files and displays and prints them in tables and charts.

Languages: C++, C, C#, Java, Delphi, Visual Basic (VB6) or HTML

LOCC – $0

LOCC is an extensible system for producing hierarchical, incremental measurements of work product size that are useful for estimation, planning, and other software engineering activities. LOCC supports size measurement of grammar-based languages through integrated support for JavaCC. LOCC produces size data corresponding to the number of packages, the number of classes in each package, the number of methods in each class, and the number of lines of code in each method.

Languages: C++, Java

Code Counter Pro – $25

Code Counter Pro is perfect for those reports you need to send to your boss – count up all your programming lines (SLOC, KLOC) automatically, find out your team’s productivity, use it as a handy help for measuring Function Points through Backfiring, measure comment percentages, and more.

Languages: ASM, COBOL, C, C++, C#, Fortran, Java, JSP, PHP, HTML, Delphi, Pascal, VB, XML

SLOC Metrics – $99

SLOC Metrics measures the size of your source code based on the Physical Source Lines of Code metric recommended by the Software Engineering Institute at Carnegie Mellon University (CMU/SEI-92-TR-019). Specifically, the source lines that are included in the count are the lines that contain executable statements, declarations, and/or compiler directives. Comments and blank lines are excluded from the count. When a line or statement contains more than one type, it is classified as the type with the highest precedence. The order of precedence for the types is: executable, declaration, compiler directive, comment, and lastly, white space.

Languages: ASP, C, C++, C#, Java, HTML, Perl, Visual Basic

Resource Standard Metrics – $200

Resource Standard Metrics, or RSM, is a source code metrics and quality analysis tool unlike any other on the market. The unique ability of RSM to support virtually any operating system provides your enterprise with the ability to standardize the measurement of source code quality and metrics throughout your organization. RSM provides the fastest, most flexible and easy-to-use tool to assist in the measurement of code quality and metrics.

Languages: C, C++, C#, Java

EZ-Metrix – $495

EZ-Metrix supports software development estimates, productivity measurement, schedule forecasting and quality analysis. With an easy Internet-based interface, multiple language support and flexible licensing features, you will be up and running in minutes with EZ-Metrix. Measure source code size from virtually all text-based languages and from any platform or operating system with the same utility. Size data may be stored in EZ-Metrix’s internal database or may be exported for further analysis.

Languages: Ada, ALGOL, antlr, asp, Assembly, awk, bash, BASIC, bison, C, C#, C++, ColdFusion, Delphi, Forth, FORTRAN, Haskell, HTML, Java, Javascript, JOVIAL, jsp, lex, lisp, Makefile, MUMPS, Pascal, Perl, PHP, PL/SQL, PL1, PowerBuilder, ps, Python, Ruby, sdl, sed, SGML, shell, SQL, Visual Basic, XML, Yacc

McCabe IQ – price unknown

McCabe IQ enables you to deliver better, more reliable software to your end-users, and is known worldwide as the gold standard for the analysis, comprehension, testing, and reengineering of new software and legacy systems. McCabe IQ uses advanced software metrics to identify, objectively measure, and report on the complexity and quality of your code at the application and enterprise level.

Languages: Ada, ASM86, C, C#, C++.NET, C++, COBOL, FORTRAN, Java, JSP, Perl, PL1, VB, VB.NET


Sonar Support with JSP & HTML

In JSP/HTML land, useful tests could be done via some regexps, e.g. checking whether styles/CSS are used (to avoid dirty hard-coded colors/fonts, for example).

If we want to build something pretty robust and extensible, I think we should integrate a Java library which is able to transform an XHTML or badly formatted HTML document into a DOM:

http://htmlparser.sourceforge.net/
http://jtidy.sourceforge.net/
http://sourceforge.net/projects/nekohtml/

A complete list of available libraries is available here: http://java-source.net/open-source/html-parsers

With a DOM, we could then implement a visitor pattern in order to let users create new rules.

Some very simple rules to start with:
Rule 1: disallow scriptlets
Rule 2: disallow some taglibs (JSTL SQL comes to mind). Could be parametrized by Taglib URL to list all disallowed taglibs.
Rule 3: enforce JSP style (XML syntax)
Rule 4: disallow hard coded labels
Rule 5: disallow dynamic JSP includes (<jsp:include>)
Rule 6: disallow external file in page attribute of dynamic JSP include
Rule 7: disallow TLD location for URI in taglib declaration
For HTML
Rule 8: enforce <script> at the end of the body
Rule 9: disallow <style>
Rule 10: disallow non empty <script> content
Rule 11: enforce a limit on the number of called external files (js and css)
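The regexp approach mentioned above can be sketched with grep before any DOM library is integrated. The inline page content and the patterns below are illustrative assumptions, covering Rule 1 (scriptlets) and Rule 9 (`<style>`):

```shell
# A small page with one scriptlet and one <style> block, inline for illustration:
page='<html><head><style>h1 { color: red }</style></head>
<body><% out.println("hello"); %></body></html>'

# Lines containing a scriptlet: '<%' not followed by '@' (directives are <%@):
printf '%s\n' "$page" | grep -c '<%[^@]'
# Lines containing an opening <style> tag (Rule 9):
printf '%s\n' "$page" | grep -c '<style'
```

A real Sonar rule would walk the DOM instead, but a grep like this is enough for a first smoke test over a JSP tree.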


What is HTTP (HyperText Transfer Protocol)

Short for HyperText Transfer Protocol, the underlying protocol used by the World Wide Web. HTTP defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands. For example, when you enter a URL in your browser, this actually sends an HTTP command to the Web server directing it to fetch and transmit the requested Web page.

The other main standard that controls how the World Wide Web works is HTML, which covers how Web pages are formatted and displayed.

HTTP is called a stateless protocol because each command is executed independently, without any knowledge of the commands that came before it.
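The command the browser sends is plain text. A minimal sketch of an HTTP/1.1 request (the host and path are illustrative, not from the original); each header line ends in CRLF and a blank line terminates the request:

```shell
# Build the raw text of an HTTP GET request. Because HTTP is stateless,
# every request must carry everything the server needs (here: the Host header).
request=$(printf 'GET /index.html HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n')
printf '%s\n' "$request"
```

Piping this text to a server on port 80 (e.g. with `nc`) is all it takes to speak the protocol by hand.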


Berkeley Internet Name Domain (BIND)

Abbreviated as BIND, the Berkeley Internet Name Domain is the most common implementation of the DNS protocol on the Internet. It’s freely available under the BSD License. BIND DNS servers are believed to provide about 80 percent of all DNS services. BIND was developed by the University of California at Berkeley. The most current release is BIND 9.4.2, and from Version 9 onwards it supports DNSSEC, TSIG, IPv6, and other DNS protocol enhancements.


What is DNS (Domain Name System)

Short for Domain Name System (or Service or Server), an Internet service that translates domain names into IP addresses. Because domain names are alphabetic, they’re easier to remember. The Internet, however, is really based on IP addresses. Every time you use a domain name, therefore, a DNS service must translate the name into the corresponding IP address. For example, the domain name www.example.com might translate to 198.105.232.4.

The DNS system is, in fact, its own network. If one DNS server doesn’t know how to translate a particular domain name, it asks another one, and so on, until the correct IP address is returned.
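The translation step can be watched from a shell by asking the system resolver directly. This is a sketch, not part of the original text; `localhost` is queried so the example needs no network:

```shell
# Resolve a name to an address the same way applications do, via the
# system resolver (which consults /etc/hosts and DNS as configured):
getent hosts localhost
```

Swapping `localhost` for a public domain name exercises the full DNS chain described above, with servers asking each other until an IP address comes back.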


What is SSH (Secure Shell)

SSH (Secure Shell) provides users with a secure, encrypted mechanism to log into systems and transfer files; it can be viewed as a secure replacement for FTP.

Developed by SSH Communications Security Ltd., Secure Shell is a program to log into another computer over a network, to execute commands in a remote machine, and to move files from one machine to another. It provides strong authentication and secure communications over insecure channels. It is a replacement for rlogin, rsh, rcp, and rdist.

SSH protects a network from attacks such as IP spoofing, IP source routing, and DNS spoofing. An attacker who has managed to take over a network can only force ssh to disconnect. He or she cannot play back the traffic or hijack the connection when encryption is enabled.

When using ssh’s slogin (instead of rlogin) the entire login session, including transmission of password, is encrypted; therefore it is almost impossible for an outsider to collect passwords.
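Typical usage looks like the following sketch. The remote commands need a live host, so they are shown as comments with an illustrative hostname; the key-generation step, which backs SSH's strong authentication, runs locally:

```shell
# Remote operations (hostname is illustrative):
#   ssh user@remote.example.com                    # encrypted login session
#   scp notes.txt user@remote.example.com:/tmp/    # encrypted file copy
# Strong authentication is typically backed by a local key pair:
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$keydir/id_ed25519"
ls "$keydir"
```

The public half (`id_ed25519.pub`) is what gets copied to the remote machine's authorized_keys; the private half never leaves the client.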


SNMP is the Simple Network Management Protocol

The SNMP protocol is used by network management systems to communicate with network elements. For this to work, the network element must be equipped with an SNMP agent.

Most professional-grade network hardware comes with an SNMP agent built in. These agents must be enabled and configured to communicate with the network management system. Operating systems, such as Unix and Windows, can also be configured with SNMP agents.

Simple Network Management Protocol (SNMP) is a UDP-based network protocol. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention.


What is Network File System (NFS)

Network File System (NFS) is a network file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network in a manner similar to how local storage is accessed. NFS, like many other protocols, builds on the Open Network Computing Remote Procedure Call (ONC RPC) system. The Network File System is an open standard defined in RFCs, allowing anyone to implement the protocol.

The NFS protocol is designed to be independent of the computer, operating system, network architecture, and transport protocol. This means that systems using the NFS service may be manufactured by different vendors, use different operating systems, and be connected to networks with different architectures. These differences are transparent to the NFS application, and thus, the user.
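In practice the transparency described above comes down to one export and one mount. A hedged sketch (hostnames, paths, and options are illustrative; the mount needs root and a live server, so the commands are shown as comments):

```shell
# Server side: an /etc/exports entry offering a directory read-only
# to one subnet, then reloading the export table:
#   /srv/shared   192.168.1.0/24(ro,sync)
#   exportfs -ra
#
# Client side: mount the remote directory; afterwards it behaves
# like local storage, exactly as described above:
#   mount -t nfs fileserver.example.com:/srv/shared /mnt/shared
```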


Basic RPM Tutorials

Introduction:

RPM is the RPM Package Manager. It is an open packaging system available for anyone to use. It allows users to take source code for new software and package it into source and binary form such that binaries can be easily installed and tracked and source can be rebuilt easily. It also maintains a database of all packages and their files that can be used for verifying packages and querying for information about files and/or packages.

Red Hat, Inc. encourages other distribution vendors to take the time to look at RPM and use it for their own distributions. RPM is quite flexible and easy to use, though it provides the base for a very extensive system.

RPM Basic Usage Commands

In its simplest form, RPM can be used to install packages:

rpm -i foobar-1.0-1.i386.rpm
The next simplest command is to uninstall a package:

rpm -e foobar

While these are simple commands, rpm can be used in a multitude of ways. To see which options are available in your version of RPM, type:

rpm --help

You can find more details on what those options do in the RPM man page, found by typing:

man rpm

Let’s say you delete some files by accident, but you aren’t sure what you deleted. If you want to verify your entire system and see what might be missing, you would do:

rpm -Va

Let’s say you run across a file that you don’t recognize. To find out which package owns it, you would do:

rpm -qf /usr/X11R6/bin/xjewel

Now you want to see what files the koules RPM installs. You would do:

rpm -qpl koules-1.2-2.i386.rpm

Building RPMs

The basic procedure to build an RPM is as follows:

  • Get the source code you are building the RPM for to build on your system.
  • Make a patch of any changes you had to make to the sources to get them to build properly.
  • Make a spec file for the package.
  • Make sure everything is in its proper place.
  • Build the package using RPM.

The Spec File

Here is a small spec file (eject-2.0.2-1.spec):

Summary: A program that ejects removable media using software control.
Name: eject
Version: 2.0.2
Release: 3
Copyright: GPL
Group: System Environment/Base
Source: http://metalab.unc.edu/pub/Linux/utils/disk-management/eject-2.0.2.tar.gz
Patch: eject-2.0.2-buildroot.patch
BuildRoot: /var/tmp/%{name}-buildroot
%description
The eject program allows the user to eject removable media
(typically CD-ROMs, floppy disks or Iomega Jaz or Zip disks)
using software control. Eject can also control some multi-
disk CD changers and even some devices' auto-eject features.
Install eject if you'd like to eject removable media using
software control.
%prep
%setup -q
%patch -p1 -b .buildroot
%build
make RPM_OPT_FLAGS="$RPM_OPT_FLAGS"

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/bin
mkdir -p $RPM_BUILD_ROOT/usr/man/man1

install -s -m 755 eject $RPM_BUILD_ROOT/usr/bin/eject
install -m 644 eject.1 $RPM_BUILD_ROOT/usr/man/man1/eject.1

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root)
%doc README TODO COPYING ChangeLog

/usr/bin/eject
/usr/man/man1/eject.1

%changelog
* Sun Mar 21 1999 Cristian Gafton <gafton@redhat.com>
- auto rebuild in the new build environment (release 3)

* Wed Feb 24 1999 Preston Brown <pbrown@redhat.com>
- Injected new description and group.

[ Some changelog entries trimmed for brevity.  -Editor. ]
 

The Header

The header has some standard fields in it that you need to fill in. There are a few caveats as well. The fields must be filled in as follows:

  • Summary: This is a one line description of the package.
  • Name: This must be the name string from the rpm filename you plan to use.
  • Version: This must be the version string from the rpm filename you plan to use.
  • Release: This is the release number for a package of the same version (ie. if we make a package and find it to be slightly broken and need to make it again, the next package would be release number 2).
  • Copyright: This line tells how a package is copyrighted. You should use something like GPL, BSD, MIT, public domain, distributable, or commercial.
  • Group: This is a group that the package belongs to in a higher level package tool or the Red Hat installer.
  • Source: This line points at the HOME location of the pristine source file. It is used if you ever want to get the source again or check for newer versions. Caveat: The filename in this line MUST match the filename you have on your own system (ie. don’t download the source file and change its name). You can also specify more than one source file using lines like:
Source0: blah-0.tar.gz
Source1: blah-1.tar.gz
Source2: fooblah.tar.gz

These files would go in the SOURCES directory. (The directory structure is discussed in a later section, “The Source Directory Tree”.)
  • Patch: This is the place you can find the patch if you need to download it again. Caveat: The filename here must match the one you use when you make YOUR patch. You may also want to note that you can have multiple patch files, much as you can have multiple sources. You would have something like:

Patch0: blah-0.patch
Patch1: blah-1.patch
Patch2: fooblah.patch

These files would go in the SOURCES directory.
  • Group: This line is used to tell high level installation programs (such as Red Hat’s gnorpm) where to place this particular program in its hierarchical structure. You can find the latest description in /usr/doc/rpm*/GROUPS.
  • BuildRoot: This line allows you to specify a directory as the “root” for building and installing the new package. You can use this to help test your package before having it installed on your machine.
  • %description: It’s not really a header item, but should be described with the rest of the header. You need one description tag per package and/or subpackage. This is a multi-line field that should be used to give a comprehensive description of the package.

Prep

This is the second section in the spec file. It is used to get the sources ready to build. Here you need to do anything necessary to get the sources patched and set up as they need to be to do a make.
One thing to note: Each of these sections is really just a place to execute shell scripts. You could simply make an sh script and put it after the %prep tag to unpack and patch your sources. We have made macros to aid in this, however.
The first of these macros is the %setup macro. In its simplest form (no command line options), it simply unpacks the sources and cd‘s into the source directory. It also takes the following options:

  • -n name will set the name of the build directory to the listed name. The default is $NAME-$VERSION. Other possibilities include $NAME, ${NAME}${VERSION}, or whatever the main tar file uses. (Please note that these “$” variables are not real variables available within the spec file. They are really just used here in place of a sample name. You need to use the real name and version in your package, not a variable.)
  • -c will create and cd to the named directory before doing the untar.
  • -b # will untar Source# before cd‘ing into the directory (and this makes no sense with -c so don’t do it). This is only useful with multiple source files.
  • -a # will untar Source# after cd’ing into the directory.
  • -T This option overrides the default action of untarring the Source and requires a -b 0 or -a 0 to get the main source file untarred. You need this when there are secondary sources.
  • -D Do not delete the directory before unpacking. This is only useful where you have more than one setup macro. It should only be used in setup macros after the first one (but never in the first one).

The next of the available macros is the %patch macro. This macro helps automate the process of applying patches to the sources. It takes several options, listed below:

  • # will apply Patch# as the patch file.
  • -p # specifies the number of directories to strip for the patch(1) command.
  • -P The default action is to apply Patch (or Patch0). This flag inhibits the default action and requires an explicit %patch0 to get the main patch applied. This option is useful in a second (or later) %patch macro that requires a different number than the first macro.
  • You can also do %patch# instead of doing the real command: %patch -P #
  • -b extension will save originals as filename.extension before patching.

That should be all the macros you need. After you have those right, you can also do any other setup you need to do via sh type scripting. Anything you include up until the %build macro (discussed in the next section) is executed via sh. Look at the example above for the types of things you might want to do here.
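Putting the macros together, a %prep section for a package with a main tarball plus one secondary source might look like the following hedged sketch (spec-file fragment; the source and patch numbers are illustrative):

```shell
# Spec-file fragment, shown with shell comment syntax:
#
# %prep
# %setup -q                 # unpack Source0 and cd into the directory
# %setup -q -T -D -a 1      # untar Source1 inside the already-unpacked dir
# %patch0 -p1 -b .orig      # apply Patch0, saving originals as *.orig
```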

Build

There aren’t really any macros for this section. You should just put any commands here that you would need to use to build the software once you had untarred the source, patched it, and cd’ed into the directory. This is just another set of commands passed to sh, so any legal sh commands can go here (including comments).
The variable RPM_OPT_FLAGS is set using values in /usr/lib/rpm/rpmrc. Look there to make sure you are using values appropriate for your system (in most cases you are). Or simply don’t use this variable in your spec file. It is optional.

Install

There aren’t really any macros here, either. You basically just want to put here whatever commands are necessary to install. If you have make install available to you in the package you are building, put that here. If not, you can either patch the makefile for a make install and just do a make install here, or you can hand install the files here with sh commands. You can consider your current directory to be the toplevel of the source directory.
The variable RPM_BUILD_ROOT is available to tell you the path set as the Buildroot: in the header. Using build roots is optional but highly recommended, because they keep you from cluttering your system with software that isn’t in your RPM database (building an RPM doesn’t touch your database…you must go install the binary RPM you just built to do that).

Optional pre and post Install/Uninstall Scripts

You can put scripts in that get run before and after the installation and uninstallation of binary packages. A main reason for this is to do things like run ldconfig after installing or removing packages that contain shared libraries. The macros for each of the scripts are as follows:

  • %pre is the macro to do pre-install scripts.
  • %post is the macro to do post-install scripts.
  • %preun is the macro to do pre-uninstall scripts.
  • %postun is the macro to do post-uninstall scripts.

The contents of these sections should just be any sh style script, though you do not need the #!/bin/sh.
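The ldconfig case mentioned above is the classic example. A hedged sketch (spec-file fragment for a package shipping shared libraries; as noted, no #!/bin/sh line is needed):

```shell
# Spec-file fragment, shown with shell comment syntax:
#
# %post
# /sbin/ldconfig
#
# %postun
# /sbin/ldconfig
```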

Files

This is the section where you must list the files for the binary package. RPM has no way to know what binaries get installed as a result of make install. There is NO way to do this. Some have suggested doing a find before and after the package install. With a multiuser system, this is unacceptable as other files may be created during a package building process that have nothing to do with the package itself.
There are some macros available to do some special things as well. They are listed and described here:

  • %doc is used to mark documentation in the source package that you want installed in a binary install. The documents will be installed in /usr/doc/$NAME-$VERSION-$RELEASE. You can list multiple documents on the command line with this macro, or you can list them all separately using a macro for each of them.
  • %config is used to mark configuration files in a package. This includes files like sendmail.cf, passwd, etc. If you later uninstall a package containing config files, any unchanged files will be removed and any changed files will get moved to their old name with a .rpmsave appended to the filename. You can list multiple files with this macro as well.
  • %dir marks a single directory in a file list to be included as being owned by a package. By default, if you list a directory name WITHOUT a %dir macro, EVERYTHING in that directory is included in the file list and later installed as part of that package.
  • %defattr allows you to set default attributes for files listed after the defattr declaration. The attributes are listed in the form (mode, owner, group) where the mode is the octal number representing the bit pattern for the new permissions (like chmod would use), owner is the username of the owner, and group is the group you would like assigned. You may leave any field to the installed default by simply placing a - in its place, as was done in the mode field for the example package.
  • %files -f <filename> will allow you to list your files in some arbitrary file within the build directory of the sources. This is nice in cases where you have a package that can build its own filelist. You then just include that filelist here and you don’t have to specifically list the files.

The biggest caveat in the file list is listing directories. If you list /usr/bin by accident, your binary package will contain every file in /usr/bin on your system.

Building It

The Source Directory Tree

The first thing you need is a properly configured build tree. This is configurable using the /etc/rpmrc file. Most people will just use /usr/src.
You may need to create the following directories to make a build tree:

  • BUILD is the directory where all building occurs by RPM. You don’t have to do your test building anywhere in particular, but this is where RPM will do its building.
  • SOURCES is the directory where you should put your original source tar files and your patches. This is where RPM will look by default.
  • SPECS is the directory where all spec files should go.
  • RPMS is where RPM will put all binary RPMs when built.
  • SRPMS is where all source RPMs will be put.

Building the Package with RPM

Once you have a spec file, you are ready to try to build your package. The most useful way to do it is with a command like the following:

rpm -ba foobar-1.0.spec

There are other options useful with the -b switch as well:

  • p means just run the prep section of the spec file.
  • l is a list check that does some checks on %files.
  • c does a prep and compile. This is useful when you are unsure of whether your source will build at all. It seems useless because you might want to just keep playing with the source itself until it builds and then start using RPM, but once you become accustomed to using RPM you will find instances when you will use it.
  • i does a prep, compile, and install.
  • b does a prep, compile, and install, and builds a binary package only.
  • a builds it all (both source and binary packages).

There are several modifiers to the -b switch. They are as follows:

  • --short-circuit will skip straight to a specified stage (can only be used with c and i).
  • --clean removes the build tree when done.
  • --keep-temps will keep all the temp files and scripts that were made in /tmp. You can actually see what files were created in /tmp using the -v option.
  • --test does not execute any real stages, but does keep the temp files (as with --keep-temps).

Reference:
http://www.ibiblio.org/pub/linux/docs/HOWTO/other-formats/html_single/RPM-HOWTO.html
