Complete guide to TypeScript certification courses, tutorials & training

Tutorials

TypeScript lets you write JavaScript the way you actually want to. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. TypeScript is purely object-oriented, with classes, interfaces, and static typing like C# or Java. Mastering TypeScript can help programmers write object-oriented programs and have them compiled to JavaScript, both on the client side and the server side.

TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. It is purely object-oriented, with classes and interfaces, and statically typed like C# or Java. You will need a compiler to compile the code and generate the JavaScript output. Basically, TypeScript is the ES6 version of JavaScript with some additional features.

What is TypeScript used for?

TypeScript is a superset of the JavaScript language that has a single open-source compiler and is developed mainly by a single vendor, Microsoft. The goal of TypeScript is to help catch mistakes early through a type system and to make JavaScript development more efficient.

Why do we need TypeScript?

TypeScript simplifies JavaScript code, making it easier to read and debug. TypeScript offers highly productive development tools for JavaScript IDEs and practices, like static checking. TypeScript makes code easier to read and comprehend. Using TypeScript, we can make a huge improvement over basic JavaScript.

What do I need to learn to use TypeScript?

TypeScript is basically a JS linter, or JS with documentation that the compiler can understand. Therefore, in contrast to other languages such as CoffeeScript, which adds syntactic sugar, or PureScript, which does not look like JavaScript at all, you do not need to learn a lot to start writing TypeScript code. Types in TS are optional, and every JS file is a valid TypeScript file. Although the compiler will complain if you have type mistakes in your initial files, it still gives you back a JavaScript file that works as it did before. Wherever you are, TypeScript will meet you there, and it is easy to build up your skills gradually.

How to get started with TypeScript?

To compile your TS code, you need to install tsc, short for the TypeScript compiler. The easiest way to do this is through the terminal. It can be done simply through npm by using the following command:
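The command below assumes you want tsc available globally; you could instead add it to a project as a dev dependency with npm install --save-dev typescript.

npm install -g typescript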

If you want to use TypeScript with Visual Studio Code, there is a handy guide on their website.

Once you have installed tsc, you can compile your files with tsc filename.ts.

Migrating your files from JavaScript to TypeScript

Let’s say that we want to change the following JavaScript file to TypeScript due to odd behavior:
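The original file is not shown here; as an illustrative stand-in, assume a small script whose my_sum function quietly concatenates instead of adding whenever it receives a string:

// my_sum.js – hypothetical stand-in for the missing listing
function my_sum(a, b) {
  return a + b;
}

console.log(my_sum(5, "10")); // prints "510" instead of 15 – the odd behavior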

Good news: any JS file is technically a valid TypeScript file, so you’re off to a great start – just switch the file extension from .js to .ts.

TypeScript has type inference, which means that it can automatically infer some of the types you use without you adding them. In this case, it infers that the function sums two values of type any, which is true but of no great use right now.

If we want to sum only numbers, we can add a type signature to my_sum to make it accept only numbers.
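A minimal sketch of what that could look like once the file is renamed to .ts (illustrative, not the article’s original listing):

// my_sum.ts – typed version
function my_sum(a: number, b: number): number {
  return a + b;
}

console.log(my_sum(5, "10")); // the compiler rejects "10" because it is not a number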

Now, TypeScript provides us with an error.

Good thing we found where the error is. To further avoid errors like these, you can also add type definitions to variables.
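For instance (the variable names are made up for illustration):

const total: number = my_sum(5, 10); // OK
const label: string = 42;            // error: type 'number' is not assignable to type 'string'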

TypeScript is quite flexible in what it can do and how it can help you.

Features of TypeScript:

To sum it up, I think TypeScript will continue to rise in popularity for the foreseeable future. It offers a great development experience, does not have much competition, and enjoys high adoption rates among new open-source projects.

Object-Oriented Language: TypeScript offers the full feature set of an object-oriented programming language, such as inheritance, classes, interfaces, modules, etc. In TypeScript, we can write code for both server-side and client-side development.

TypeScript supports JavaScript libraries: TypeScript supports all JavaScript elements. It allows developers to use existing JavaScript code with TypeScript. Here, we can use all of the JavaScript frameworks, tools, and other libraries easily.

JavaScript is TypeScript: This means that code written in JavaScript with a valid .js extension can be converted to TypeScript by changing the extension from .js to .ts and compiled along with other TypeScript files.

TypeScript is portable: TypeScript is portable because it can be executed on any device, browser, or operating system. It can run in any environment where JavaScript runs, and it is not tied to any specific virtual machine for execution.

DOM Manipulation: TypeScript can be used to manipulate the DOM for adding or removing elements, just like JavaScript.

TypeScript is just JS: TypeScript code is not executed by any browser directly. A program written in TypeScript always starts with JavaScript and ends with JavaScript, so we only need to know JavaScript to use TypeScript. The code written in TypeScript is compiled and converted into its JavaScript equivalent for execution. This process is known as transpilation. Browsers then read the generated JavaScript code and display the output.
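For instance, a small typed function compiles down to plain JavaScript with the type annotations simply stripped (shown for illustration; the exact output depends on your compiler settings):

// greet.ts
function greet(name: string): string {
  return "Hello, " + name;
}

// greet.js – roughly what tsc emits
function greet(name) {
  return "Hello, " + name;
}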

Advantages of TypeScript over JavaScript:

  • TypeScript highlights errors at compile time, during development, while JavaScript points out errors at runtime.
  • TypeScript supports strong/static typing; JavaScript does not.
  • TypeScript runs on any browser or JavaScript engine.
  • Great tooling support with IntelliSense, which offers live hints as the code is typed.
  • It has a namespace concept through defining modules.

TypeScript Variables:

A variable is a storage location that is used to store a value to be referenced and used by programs. It acts as a container for a value in code and must be declared before use. We can declare a variable by using the var keyword. In TypeScript, variables follow the same naming rules as JavaScript variable declarations. These rules are –

  • The variable name can contain letters and numeric digits.
  • The variable name cannot start with a digit.
  • The variable name cannot contain spaces or special characters, except the underscore (_) and the dollar ($) sign.

In ES6, we can declare variables using the let and const keywords. These variables have similar syntax for declaration and initialization but differ in scope and usage. In TypeScript, it is always recommended to define a variable using the let keyword because it is safer than var.

The let keyword is similar to the var keyword in some respects, and const is a let that prevents re-assignment to a variable.
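A small illustration of the differences in scope and re-assignment (an assumed example, not from the original article):

var a = 1;    // function-scoped; can be re-declared and re-assigned
let b = 2;    // block-scoped; can be re-assigned but not re-declared in the same block
const c = 3;  // block-scoped; cannot be re-assigned

if (true) {
  let b = 20; // a separate variable, visible only inside this block
}
// c = 4;     // error: cannot assign to 'c' because it is a constant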

Collections in TypeScript:

TypeScript supports two kinds of collections:

arrays (where all the members are of the same type and are accessed by position)

tuples (where each member can be of a different type).
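A quick sketch of both collection types (the values are made up for illustration):

const scores: number[] = [90, 85, 72];         // array: every member is a number
const entry: [string, number] = ["Alice", 30]; // tuple: fixed positions with different types

console.log(scores[0]); // 90
console.log(entry[1]);  // 30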

Control Statements in TypeScript:

1. if statement

2. if else statement

3. if else if statement

If Statement

The if statement is used to execute a block of statements if the specified condition is true.

Syntax:
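In general form, where condition stands for any boolean expression:

if (condition) {
    // statements executed when the condition is true
}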

If else statement

The if else statement is used to execute one of two blocks of statements depending on the condition. If the condition is true, the if block executes; otherwise, the else block executes.

Syntax:
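In general form:

if (condition) {
    // executed when the condition is true
} else {
    // executed when the condition is false
}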

If else if statement

The if else if statement is used to execute one block of statements out of many, depending on the conditions. If condition1 is true, the block of statements1 is executed; else, if condition2 is true, the block of statements2 is executed, and so on. If no condition is true, the else block of statements is executed.

Syntax:
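In general form:

if (condition1) {
    // executed when condition1 is true
} else if (condition2) {
    // executed when condition2 is true
} else {
    // executed when no condition is true
}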

TypeScript Control Statements Example:
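The original example is missing here; a minimal stand-in, with made-up values, could look like this:

const marks: number = 72;

if (marks >= 80) {
    console.log("Distinction");
} else if (marks >= 40) {
    console.log("Pass");
} else {
    console.log("Fail");
}
// Output: Pass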

What is a TypeScript application?


TypeScript is a programming language developed and maintained by Microsoft. It is a strict syntactical superset of JavaScript and adds optional static typing to the language. TypeScript may be used to develop JavaScript applications for both server-side and client-side execution, as with Node.js or Deno.

Scope of TypeScript:

Variable scopes in TypeScript: Here, scope means the visibility of a variable. The scope determines whether or not we can access the variable. TypeScript variables can have the following scopes:

Local Scope: As the name suggests, local variables are declared within a block such as a method or loop. They are available only within the construct where they are declared.

Global Scope: If the variable is declared outside any construct, then we can access it anywhere. This is known as global scope.

Class Scope: If a variable is declared inside a class, we can access that variable within the class only.
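An illustrative sketch of the three scopes (the names are made up for the example):

let appName = "demo";    // global scope: visible everywhere in this file

class Counter {
  private count = 0;     // class scope: accessible only inside Counter

  increment(): void {
    let step = 1;        // local scope: visible only inside increment()
    this.count += step;
  }
}

const counter = new Counter();
counter.increment();

console.log(appName);          // OK – global scope
// console.log(step);          // error: 'step' is not visible outside increment()
// console.log(counter.count); // error: 'count' is private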

What are the Objectives for TypeScript Training?

  • Comprehend TypeScript concepts
  • Apply different techniques to visualize data using multiple graphs and dashboards
  • Implement TypeScript in the organization to monitor operational intelligence
  • Troubleshoot various application log issues using SPL (Search Processing Language)
  • Implement indexers, forwarders, deployment servers, and deployers in TypeScript.

What are the benefits of TypeScript Certification?

Certifications always play a critical role in any profession. You might find some specialists who will tell you that certifications do not hold considerable value; even so, a certification validates an individual’s ability to get the most out of the technology, and it demonstrates that the holder can navigate and use the software effectively.

Agenda of TypeScript

TypeScript Course

Hello everybody, if you are thinking of learning TypeScript this year and looking for some exceptional resources like books, courses, and tutorials, then you have come to the right place. In my last few articles, I have shared several of the best Angular framework tutorials and courses, and today, I am going to share several of the best TypeScript online courses you can join to learn it by yourself. DevOpsSchool is one of the best institutes for certification.

Conclusions

Overall, TypeScript is a great tool to have in your toolset even if you do not use it to its full capability. It is easy to start small and grow slowly, learning and adding new features as you go. TypeScript is practical and welcoming to beginners, so there is no need to be afraid.

I hope this article will be useful in your TypeScript journey. If you need help or have some questions, be sure to ask them on our social media channels like Twitter or Facebook.


What is the difference between Terraform and Ansible?

In today’s growing world of DevOps, big players have started implementing business processes on IaC (Infrastructure as Code). IaC simplifies the process of large-scale management, and modern IaC tools simplify configuration and help resolve server problems quickly.

Terraform and Ansible are two popular frontline DevOps tools that provision and configure servers. Ansible is the more mature of the two, originating in early 2012; Terraform is a HashiCorp product and was first introduced in 2014. As the DevOps industry gains momentum, Ansible and Terraform are gaining popularity along with this trend. Both tools are used to deploy code and infrastructure; in simple terms, Ansible acts as a configuration management solution, while Terraform is a service orchestration tool.

In this blog, we focus on Terraform vs Ansible, a discussion that is highly dominating the current DevOps market. Let’s first learn about these tools:

What is Ansible?

Ansible is an open-source automation tool, or platform, used for IT tasks such as configuration management, application deployment, intra-service orchestration, and provisioning. Automation is crucial these days, with IT environments that are too complex and often need to scale too quickly for system administrators and developers to keep up if they had to do everything manually.

In simple words, it frees up time and increases efficiency. It is also rapidly rising to the top in the world of automation tools.

Benefits of Ansible

  • Free: Ansible is an open-source tool.
  • Very simple to set up and use: No special coding skills are necessary to use Ansible’s playbooks.
  • Powerful: Ansible lets you model even highly complex IT workflows.
  • Flexible: Orchestrate the entire application environment no matter where it’s deployed. You can also customize it based on your needs.
  • Agentless: No need to install any other software or firewall ports on the client systems you want to automate. You also don’t have to set up a separate management structure.
  • Efficient: Because you don’t need to install any extra software, there’s more room for application resources on your server.

What is Terraform?

Terraform is an open source, CLI-based infrastructure as code tool created by Hashicorp.

Terraform is an infrastructure as code tool that helps you build, change, and version infrastructure safely and efficiently. This includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc. Terraform can manage both existing service providers and custom in-house solutions.

Benefits of Using Terraform

So now that you know what Terraform is and how it works, let’s take a look at the top reasons why you should start using Terraform today:

  • Improve multi-cloud infrastructure deployment
  • Automated infrastructure management
  • Infrastructure as code
  • Reduced development costs
  • Reduced time to provision

What is the difference between Terraform and Ansible?

In this section, let’s check the difference between the two. While both are designed for similar purposes, both have definitely laid a foundation for lifecycle management frameworks. Both players have now placed their cards on the table, so let us find out their differences on some of the major factors:

Conclusion

Whether the answer is “Why Terraform” or “Why not Ansible” depends mainly on your requirements. Both tools have many similarities and some fair differences as well. So, which one is the best? From a practical perspective, it is advisable to use Ansible for configuration management and Terraform for orchestration. The primary purpose of Terraform is orchestration, and it is considerably intuitive for that purpose.

Also, from a career perspective, opportunities for professionals skilled in these tools are increasing significantly, with huge scope for career growth.

According to Indeed.com, the average salary of a professional with these skills is $177,530 per annum.

Both Ansible and Terraform are leading DevOps tools adopted by many MNCs worldwide. With this, the demand for professionals with these skills is gradually increasing.

Below you can watch and learn Ansible and Terraform Tutorials

Ansible Advance Tutorial – Intro, Adhoc Command, Inventory, Playbook

Terraform Basic Tutorial with Demo

Hope you find this answer helpful.


Integration of JBoss, Apache2, and SSL

My application (.ear) is running in JBoss without any issues on port 7001. I have the following requirements:

Task 1. Integrate JBoss with Apache2 so that all requests come through Apache instead of JBoss.

Task 2. Implement SSL with Apache2 so the site opens with HTTPS instead of HTTP.

For task 1, I have carefully followed community.jboss.org/wiki/UsingModjk12WithJBoss, with some issues: 1. The application is up and running without any issues, but logout has some problems. 2. I want to stop the direct JBoss access point but am not getting any clue how.

For task 2, once this is up and running, I will have to implement SSL with Apache so the site only opens with HTTPS instead of HTTP. Any help on this front, such as links or references, would be appreciated.

To follow these issues properly, you can find my work updates on this link. I will keep posting updates on the issues…


Kunena Migration

Before starting the migration:

  • upgrade both the source and target websites to the latest available versions of the component
  • backup the source and target websites

Migrate core functions with JUpgrade or SP Upgrade or similar so that user ids are preserved.

Export the 24 x Joomla 1.5 Kunena tables with the “Quick” and “SQL” options (Note: you can multi-select tables and export them as one file):

  • jos_kunena_announcement
  • jos_kunena_attachments
  • jos_kunena_attachments_bak
  • jos_kunena_categories
  • jos_kunena_config
  • jos_kunena_config_backup
  • jos_kunena_favorites
  • jos_kunena_groups
  • jos_kunena_messages
  • jos_kunena_messages_text
  • jos_kunena_moderation
  • jos_kunena_polls
  • jos_kunena_polls_options
  • jos_kunena_polls_users
  • jos_kunena_ranks
  • jos_kunena_sessions
  • jos_kunena_smileys
  • jos_kunena_subscriptions
  • jos_kunena_subscriptions_categories
  • jos_kunena_thankyou
  • jos_kunena_users
  • jos_kunena_users_banned
  • jos_kunena_version
  • jos_kunena_whoisonline

If necessary, amend table prefixes by searching and replacing all of the old prefixes e.g. jos_ to j25_ inside the file(s).

Delete the 22 x Joomla 2.5 Kunena tables.

Import the Joomla 1.5 Kunena tables into the Joomla 2.5 database.

Copy across files in /media/kunena/attachments/

In Category Manager, reapply the permissions as these don’t seem to be copied across.

References:
http://www.kunena.org/forum/159-K-17-Common-Questions/111394-Transfer-kunena-17-with-joomla-15—in-joomla-17
http://www.kunena.org/forum/159-k-16-and-k-17-common-questions/103459-merged-topic-how-to-move-my-kunena-forum-from-one-site-to-another#103539


How To scp, ssh and rsync without prompting for password using OpenSSH

Verify that local-host and remote-host is running openSSH
ssh -V
OpenSSH_4.3p2, OpenSSL 0.9.8b 04 May 2006

Let’s say you want to copy between two hosts, host_src and host_dest. host_src is the host where you would run the scp, ssh or rsync command, irrespective of the direction of the file copy!
1. On host_src, run this command as the user that runs scp/ssh/rsync
$ ssh-keygen -t rsa
This will prompt for a passphrase. Just press the enter key. It’ll then generate an identification (private key) and a public key. Do not ever share the private key with anyone! ssh-keygen shows where it saved the public key. This is by default ~/.ssh/id_rsa.pub:
Your public key has been saved in <your_home_dir>/.ssh/id_rsa.pub

2. Transfer the id_rsa.pub file to host_dest by either ftp, scp, rsync or any other method.

3. On host_dest, login as the remote user which you plan to use when you run scp, ssh or rsync on host_src.

4. Copy the contents of id_rsa.pub to ~/.ssh/authorized_keys
$ cat id_rsa.pub >>~/.ssh/authorized_keys
$ chmod 700 ~/.ssh/authorized_keys
If this file does not exist, the above command will create it. Make sure you remove permission for others to read this file. If it’s a public key, why prevent others from reading it? Probably because the owner of the key has distributed it to a few trusted users and has not placed any additional security measures to check whether it’s really a trusted user.

5. Note that ssh by default does not allow root to log in. This has to be explicitly enabled on host_dest. This can be done by editing /etc/ssh/sshd_config and changing the option of PermitRootLogin from no to yes. Don’t forget to restart sshd so that it reads the modified config file. Do this only if you want to use the root login.

Putty using Windows:

If you are using Windows, you can use PuTTYgen to generate public/private keys and do the configuration in PuTTY as in the image below to connect to the other server…

 

PIC TO BE UPLOADED>

 

 

Reference.
http://www.thegeekstuff.com/2008/10/perform-ssh-and-scp-without-password-from-ssh2-to-openssh/
http://blogs.oracle.com/jkini/entry/how_to_scp_scp_and
http://www.thegeekstuff.com/2008/06/perform-ssh-and-scp-without-entering-password-on-openssh/
http://www.csua.berkeley.edu/~ranga/notes/ssh_nopass.html


Apache HTACCESS Tutorial

Introduction
What Can I Do?
Creating A .htaccess File
What is .htaccess
Why Not to use .htaccess
Error Documents | Custom Error Pages
Blocking users by IP
Blocking users/ sites by referrer
Block traffic from a single referrer
Block traffic from multiple referrers
Blocking bad bots and site rippers
Change your default directory page
Redirects
Prevent viewing of .htaccess file
Adding MIME Types
Preventing hot linking of images and other file types
Preventing Directory Listing
Save bandwidth with .htaccess!
Disable directory listings
Hot link prevention techniques
Protecting your images and (zip) files from linking
Reference

 

Introduction

In this tutorial you will find out about the .htaccess file and the power it has to improve your website. Although .htaccess is only a file, it can change settings on the servers and allow you to do many different things, the most popular being able to have your own custom 404 error pages. .htaccess isn’t difficult to use and is really just made up of a few simple instructions in a text file.
What Can I Do?

You may be wondering what .htaccess can do, or you may have read about some of its uses but don’t realise how many things you can actually do with it.

There is a huge range of things .htaccess can do including: password protecting folders, redirecting users automatically, custom error pages, changing your file extensions, banning users with certain IP addresses, only allowing users with certain IP addresses, stopping directory listings and using a different file as the index file.
Creating A .htaccess File

Creating a .htaccess file may cause you a few problems. Writing the file is easy, you just need to enter the appropriate code into a text editor (like notepad). You may run into problems with saving the file. Because .htaccess is a strange file name (the file actually has no name but an 8 letter file extension) it may not be accepted on certain systems (e.g. Windows 3.1). With most operating systems, though, all you need to do is to save the file by entering the name as:

“.htaccess”

(including the quotes). If this doesn’t work, you will need to name it something else (e.g. htaccess.txt) and then upload it to the server. Once you have uploaded the file you can then rename it using an FTP program.

Warning

Before beginning using .htaccess, I should give you one warning. Although using .htaccess on your server is extremely unlikely to cause you any problems (if something is wrong it simply won’t work), you should be wary if you are using the Microsoft FrontPage Extensions. The FrontPage extensions use the .htaccess file so you should not really edit it to add your own information. If you do want to (this is not recommended, but possible) you should download the .htaccess file from your server first (if it exists) and then add your code to the beginning. 
What is .htaccess
.htaccess is a configuration file for use on web servers running the Apache Web Server software. When a .htaccess file is placed in a directory which is in turn ‘loaded via the Apache Web Server’, then the .htaccess file is detected and executed by the Apache Web Server software. These .htaccess files can be used to alter the configuration of the Apache Web Server software to enable/disable additional functionality and features that the Apache Web Server software has to offer. These facilities include basic redirect functionality, for instance if a 404 file not found error occurs, or for more advanced functions such as content password protection or image hot link prevention.

.htaccess files (or “distributed configuration files”) provide a way to make configuration changes on a per-directory basis. A file, containing one or more configuration directives, is placed in a particular document directory, and the directives apply to that directory, and all subdirectories thereof.

Simply put, they are invisible plain text files where one can store server directives. Server directives are anything you might put in an Apache config file (httpd.conf) or even a php.ini**, but unlike those “master” directive files, these .htaccess directives apply only to the folder in which the .htaccess file resides, and all the folders inside.

This ability to plant .htaccess files in any directory of our site allows us to set up a finely-grained tree of server directives, each subfolder inheriting properties from its parent, whilst at the same time adding to, or over-riding certain directives with its own .htaccess file. For instance, you could use .htaccess to enable indexes all over your site, and then deny indexing in only certain subdirectories, or deny index listings site-wide, and allow indexing in certain subdirectories. One line in the .htaccess file in your root and your whole site is altered. From here on, I’ll probably refer to the main .htaccess in the root of your website as “the master .htaccess file”, or “main” .htaccess file.

There’s a small performance penalty for all this .htaccess file checking, but not noticeable, and you’ll find most of the time it’s just on and there’s nothing you can do about it anyway, so let’s make the most of it..

Why Not to use .htaccess
There are two main reasons to avoid the use of .htaccess files.

  1. The first of these is performance. When AllowOverride is set to allow the use of .htaccess files, Apache will look in every directory for .htaccess files. Thus, permitting .htaccess files causes a performance hit, whether or not you actually even use them! Also, the .htaccess file is loaded every time a document is requested.

However, putting this configuration in your server configuration file will result in less of a performance hit, as the configuration is loaded once when Apache starts, rather than every time a file is requested.

  2. The second consideration is one of security. You are permitting users to modify server configuration, which may result in changes over which you have no control. Carefully consider whether you want to give your users this privilege.

htaccess files must be uploaded as ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 or (RW-R–R–). This makes the file usable by the server, but prevents it from being read by a browser, which can seriously compromise your security.

Error Documents |  Custom Error Pages
In order to specify your own ErrorDocuments, you need to be slightly familiar with the server-returned error codes (see the list below). You do not need to specify error pages for all of these, in fact you shouldn’t. An ErrorDocument for code 200 would cause an infinite loop whenever a page was found… this would not be good.
In order to specify your own customized error documents, you simply need to add the following command, on one line, within your htaccess file:

ErrorDocument code /directory/filename.ext
or
ErrorDocument 404 /errors/notfound.html
This would cause any request resulting in error code 404 to be forwarded to yoursite.com/errors/notfound.html

Likewise with:
ErrorDocument 500 /errors/internalerror.html

Successful Client Requests:
200 OK
201 Created
202 Accepted
203 Non-Authoritative Information
204 No Content
205 Reset Content
206 Partial Content

Client Request Redirected:
300 Multiple Choices
301 Moved Permanently
302 Moved Temporarily
303 See Other
304 Not Modified
305 Use Proxy

Client Request Errors:
400 Bad Request
401 Authorization Required
402 Payment Required (not used yet)
403 Forbidden
404 Not Found
405 Method Not Allowed
406 Not Acceptable (encoding)
407 Proxy Authentication Required
408 Request Timed Out
409 Conflicting Request
410 Gone
411 Content Length Required
412 Precondition Failed
413 Request Entity Too Long
414 Request URI Too Long
415 Unsupported Media Type

Server Errors:
500 Internal Server Error
501 Not Implemented
502 Bad Gateway
503 Service Unavailable
504 Gateway Timeout
505 HTTP Version Not Supported

If you were to use an error document handler for each of the error codes I mentioned, the htaccess file would look like the following (note each command is on its own line):
ErrorDocument 400 /errors/badrequest.html
ErrorDocument 401 /errors/authreqd.html
ErrorDocument 403 /errors/forbid.html
ErrorDocument 404 /errors/notfound.html
ErrorDocument 500 /errors/serverr.html
You can specify a full URL rather than a virtual URL in the ErrorDocument string (http://yoursite.com/errors/notfound.html vs. /errors/notfound.html). But this is not the preferred method by the server’s happiness standards.
You can also specify HTML, believe it or not!
ErrorDocument 401 "<body bgcolor=#ffffff><h1>You have
to actually <b>BE</b> a <a href="#">member</A> to view
this page, Colonel!

The only time I use that HTML option is if I am feeling particularly saucy, since you can have so much more control over the error pages when used in conjunction with xSSI or CGI or both. Also note that the ErrorDocument starts with a ” just before the HTML starts, but does not end with one…it shouldn’t end with one and if you do use that option, keep it that way. And again, that should all be on one line, no naughty word wrapping!

Blocking users by IP
Is there a pesky person perpetrating pain upon you? Stalking your site from the vastness of the electron void? Block them! In your htaccess file, add the following code (changing the IPs to suit your needs), each command on one line:

order allow,deny
deny from 123.45.6.7
deny from 012.34.5.

allow from all

You can deny access based upon IP address or an IP block. The above blocks access to the site from 123.45.6.7, and from any address under the IP block 012.34.5. (012.34.5.1, 012.34.5.2, 012.34.5.3, etc.). I have yet to find a useful application of this; maybe if there is a site scraping your content you can block them, who knows.

You can also set an option for deny from all, which would of course deny everyone. You can also allow or deny by domain name rather than IP address (allow from .javascriptkit.com works for www.javascriptkit.com or virtual.javascriptkit.com, etc.)

Blocking users/ sites by referrer

Blocking users or sites that originate from a particular domain is another useful trick of .htaccess. Let’s say you check your logs one day, and see tons of referrals from a particular site, yet upon inspection you can’t find a single visible link to your site on theirs. The referral isn’t a “legitimate” one, with the site most likely hot linking to certain files on your site such as images,

.css files, or files you can’t even make out. Remember, your logs will generate a referrer entry for any kind of reference to your site that has a traceable origin.

Before I get to the code itself, it’s important to note that blocking access by referrer in .htaccess requires the help of the Apache module mod_rewrite to make out the referrer first. This module is installed by default on most servers (ask your host if you’re not sure). So, to deny access all traffic that originate from a particular domain (referrers) to your site, use the following code:

Block traffic from a single referrer:
RewriteEngine on
# Options +FollowSymlinks
RewriteCond %{HTTP_REFERER} badsite\.com [NC]
RewriteRule .* - [F]

Block traffic from multiple referrers

RewriteEngine on
# Options +FollowSymlinks
RewriteCond %{HTTP_REFERER} badsite\.com [NC,OR]
RewriteCond %{HTTP_REFERER} anotherbadsite\.com
RewriteRule .* - [F]

In the “single referrer” case above, “badsite\.com” is the domain you wish to block. Note the backslash preceding the period (“.”) to actually denote a literal period, since in regular expressions a bare period denotes any character, which is not what we want. The flag “[NC]” is added to the end of the domain to make it case insensitive, so whether the domain is “badsite.com”, “Badsite.com”, or however bad it gets, it gets blocked. Finally, the last line in the .htaccess file specifies that the action to take when a match is found is to fail the request, meaning the referrer traffic will hit a 403 Forbidden error. The only difference between blocking a single referrer and multiple referrers is the modified [NC,OR] flag in the latter case, applied to every domain but the last.

Now, you may have noticed the line “Options +FollowSymlinks” above, which is commented. Uncomment this line if your server isn’t configured with FollowSymLinks in its <directory> section in httpd.conf, and you get a 500 Internal Server error when using the code above as is.

Blocking bad bots and site rippers (aka offline browsers)
Below is a useful code block you can insert into your .htaccess file for blocking a lot of the known bad bots and site rippers currently out there. It is derived from my reading of the excellent discussion “A close to perfect .htaccess file”, specifically, “A close to perfect .htaccess file II.” Simply add the below code to your .htaccess file:
Refer More…
http://www.webmasterworld.com/forum13/687-1-10.htm
http://www.webmasterworld.com/forum92/205.htm

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} ^BlackWidow [OR]
RewriteCond %{HTTP_USER_AGENT} ^Bot\ mailto:craftbot@yahoo.com [OR]
RewriteCond %{HTTP_USER_AGENT} ^ChinaClaw [OR]
RewriteCond %{HTTP_USER_AGENT} ^Custo [OR]
RewriteCond %{HTTP_USER_AGENT} ^DISCo [OR]
RewriteCond %{HTTP_USER_AGENT} ^Download\ Demon [OR]
RewriteCond %{HTTP_USER_AGENT} ^eCatch [OR]
RewriteCond %{HTTP_USER_AGENT} ^EirGrabber [OR]
RewriteCond %{HTTP_USER_AGENT} ^EmailSiphon [OR]
RewriteCond %{HTTP_USER_AGENT} ^EmailWolf [OR]
RewriteCond %{HTTP_USER_AGENT} ^Express\ WebPictures [OR]
RewriteCond %{HTTP_USER_AGENT} ^ExtractorPro [OR]
RewriteCond %{HTTP_USER_AGENT} ^EyeNetIE [OR]

 

Bots that are listed above will all receive a 403 Forbidden error when trying to view your site. The amount of bandwidth savings and decrease in server resource usage as a result may be significant in many cases.

Change your default directory page
Some of you may be wondering, just what in the world is a DirectoryIndex? Well, grasshopper, this is a command which allows you to specify a file that is to be loaded as your default page whenever a directory or url request comes in that does not specify a specific page. Tired of having yoursite.com/index.html come up when you go to yoursite.com? Want yoursite.com/ILikePizzaSteve.html to come up instead? No problem!

DirectoryIndex filename.html

This would cause filename.html to be treated as your default page, or default directory page. You can also append other filenames to it. You may want to have certain directories use a script as a default page. That’s no problem too!

DirectoryIndex filename.html index.cgi index.pl default.htm

Placing the above command in your htaccess file

will cause this to happen: When a user types in yoursite.com, your site will look for filename.html in your root directory (or any directory if you specify this in the global htaccess), and if it finds it, it will load that page as the default page. If it does not find filename.html, it will then look for index.cgi; if it finds that one, it will load it, if not, it will look for index.pl and the whole process repeats until it finds a file it can use. Basically, the list of files is read from left to right.

Redirects
Ever go through the nightmare of changing significant portions of your site, then having to deal with the problem of people finding their way from the old pages to the new? It can be nasty. There are different ways of redirecting pages, through http-equiv, javascript or any of the server-side languages. And then you can do it through htaccess, which is probably the most effective, considering the minimal amount of work required to do it.
htaccess uses redirect to look for any request for a specific page (or a non-specific location, though this can cause infinite loops) and if it finds that request, it forwards it to a new page you have specified:
Redirect /olddirectory/oldfile.html http://yoursite.com/newdirectory/newfile.html
Note that there are 3 parts to that, which should all be on one line: the Redirect command, the location of the file/directory you want redirected relative to the root of your site (/olddirectory/oldfile.html = yoursite.com/olddirectory/oldfile.html) and the full URL of the location you want that request sent to. Each of the 3 is separated by a single space, but all on one line. You can also redirect an entire directory by simply using Redirect /olddirectory http://yoursite.com/newdirectory/
Using this method, you can redirect any number of pages no matter what you do to your directory structure. It is the fastest method and has a global effect.

Prevent viewing of .htaccess file
If you use htaccess for password protection, then the location containing all of your password information is plainly available through the htaccess file. If you have set incorrect permissions or if your server is not as secure as it could be, a browser has the potential to view an htaccess file through a standard web interface and thus compromise your site/server. This, of course, would be a bad thing. However, it is possible to prevent an htaccess file from being viewed in this manner:
<Files .htaccess>
order allow,deny
deny from all
</Files>
The first line specifies that the file named .htaccess is having this rule applied to it. You could use this for other purposes as well if you get creative enough.
If you use this in your htaccess file, a person trying to see that file would get returned (under most server configurations) a 403 error code. You can also set permissions for your htaccess file via CHMOD, which would also prevent this from happening, as an added measure of security: 644 or RW-R–R–

Adding MIME Types
What if your server wasn’t set up to deliver certain file types properly? A common occurrence with MP3 or even SWF files. Simple enough to fix:

AddType application/x-shockwave-flash swf

AddType is specifying that you are adding a MIME type. The application string is the actual parameter of the MIME you are adding, and the final little bit is the default extension for the MIME type you just added, in our example this is swf for ShockWave File.

Preventing hot linking of images and other file types
In the webmaster community, “hot linking” is a curse phrase. Also known as “bandwidth stealing” by the angry site owner, it refers to linking directly to non-html objects not on one’s own server, such as images, .js files etc. The victim’s server in this case is robbed of bandwidth (and in turn money) as the violator enjoys showing content without having to pay for its delivery. The most common practice of hot linking pertains to another site’s images.
Using .htaccess, you can disallow hot linking on your server, so anyone attempting to link to an image or CSS file on your site, for example, is either blocked (failed request, such as a broken image) or served different content (i.e. an image of an angry man). Note that mod_rewrite needs to be enabled on your server in order for this aspect of .htaccess to work. Inquire with your web host regarding this.
With all the pieces in place, here’s how to disable hot linking of certain file types on your site, in the case below, images, JavaScript (js) and CSS (css) files on your site. Simply add the below code to your .htaccess file, and upload the file either to your root directory, or a particular subdirectory to localize the effect to just one section of your site:

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?mydomain.com/.*$ [NC]
RewriteRule \.(gif|jpg|js|css)$ - [F]

Be sure to replace “mydomain.com” with your own. The above code creates a failed request when hot linking of the specified file types occurs. In the case of images, a broken image is shown instead.

Serving alternate content when hot linking is detected

You can set up your .htaccess file to actually serve up different content when hot linking occurs. This is more commonly done with images, such as serving up an Angry Man image in place of the hot linked one. The code for this is:

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?mydomain.com/.*$ [NC]
RewriteRule \.(gif|jpg)$ http://www.mydomain.com/angryman.gif [R,L]

Same deal- replace mydomain.com with your own, plus angryman.gif.
Time to pour a bucket of cold water on hot linking!

Preventing Directory Listing
Do you have a directory full of images or zips that you do not want people to be able to browse through? Typically a server is setup to prevent directory listing, but sometimes they are not. If not, become self-sufficient and fix it yourself:
IndexIgnore *
The * is a wildcard that matches all files, so if you stick that line into an htaccess file in your images directory, nothing in that directory will be allowed to be listed.
On the other hand, what if you did want the directory contents to be listed, but only if they were HTML pages and not images? Simple says I:
IndexIgnore *.gif *.jpg
This would return a list of all files not ending in .jpg or .gif, but would still list .txt, .html, etc.
And conversely, if your server is setup to prevent directory listing, but you want to list the directories by default, you could simply throw this into an htaccess file the directory you want displayed:
Options +Indexes
If you do use this option, be very careful that you do not put any unintentional or compromising files in this directory. And if you guessed it by the plus sign before Indexes, you can throw in a minus sign (Options -Indexes) to prevent directory listing entirely–this is typical of most server setups and is usually configured elsewhere in the apache server, but can be overridden through htaccess.
If you really want to be tricky, using the +Indexes option, you can include a default description for the directory listing that is displayed when you use it by placing a file called HEADER in the same directory. The contents of this file will be printed out before the list of directory contents is listed. You can also specify a footer, though it is called README, by placing it in the same directory as the HEADER. The README file is printed out after the directory listing is printed.
Typically servers are setup to prevent directory listing, but often they aren’t. If you have a directory full of downloads or images that you don’t want people to be able to browse through, add the following line to your .htaccess file…
IndexIgnore *
The * matches all files. If, for example, you want to prevent only listing of images, use…
IndexIgnore *.gif *.jpg

Alternative Index Files
You may not always want to use index.htm or index.html as your index file for a directory, for example if you are using PHP files in your site, you may want index.php to be the index file for a directory. You are not limited to ‘index’ files though. Using .htaccess you can set foofoo.blah to be your index file if you want to!

Alternate index files are entered in a list. The server will work from left to right, checking to see if each file exists; if none of them exist it will display a directory listing (unless, of course, you have turned this off).

DirectoryIndex index.php index.php3 messagebrd.pl index.html index.htm

Password Protection with .htaccess

Although there are many uses of the .htaccess file, by far the most popular, and probably most useful, is being able to reliably password protect directories on websites. Although JavaScript etc. can also be used to do this, only .htaccess has total security (as someone must know the password to get into the directory, there are no ‘back doors’).

The .htaccess File

Adding password protection to a directory using .htaccess takes two stages. The first part is to add the appropriate lines to your .htaccess file in the directory you would like to protect. Everything below this directory will be password protected:

AuthName "Section Name"
AuthType Basic
AuthUserFile /full/path/to/.htpasswd
Require valid-user

There are a few parts of this which you will need to change for your site. You should replace “Section Name” with the name of the part of the site you are protecting e.g. “Members Area”.

The /full/path/to/.htpasswd should be changed to reflect the full server path to the .htpasswd file (more on this later). If you do not know what the full path to your webspace is, contact your system administrator for details.

The .htpasswd File

Password protecting a directory takes a little more work than any of the other .htaccess functions because you must also create a file to contain the usernames and passwords which are allowed to access the site. These should be placed in a file which (by default) should be called .htpasswd. Like the .htaccess file, this is a file with no name and an 8 letter extension. This can be placed anywhere within your website (as the passwords are encrypted) but it is advisable to store it outside the web root so that it is impossible to access it from the web.

Entering Usernames And Passwords

Once you have created your .htpasswd file (you can do this in a standard text editor) you must enter the usernames and passwords to access the site. They should be entered as follows:

username:password

where the password is the encrypted format of the password. To encrypt the password you will either need to use one of the premade scripts available on the web or write your own. There is a good username/password service at the KxS site which will allow you to enter the user name and password and will output it in the correct format.

For multiple users, just add extra lines to your .htpasswd file in the same format as the first. There are even scripts available for free which will manage the .htpasswd file and will allow automatic adding/removing of users etc.

Accessing The Site

When you try to access a site which has been protected by .htaccess your browser will pop up a standard username/password dialog box. If you don’t like this, there are certain scripts available which allow you to embed a username/password box in a website to do the authentication. You can also send the username and password (unencrypted) in the URL as follows:

http://username:password@www.website.com/directory/

Save bandwidth with .htaccess!
If you pay for your bandwidth, this wee line could save you hard cash..

Save me hard cash! and help the internet! 
<ifModule mod_php4.c>
php_value zlib.output_compression 16386
</ifModule>

All it does is enable PHP’s built-in transparent zlib compression. This will halve your bandwidth usage in one stroke, more than that, in fact. Of course it only works with data being output by the PHP module, but if you design your pages with this in mind, you can use php echo statements, or better yet, php “includes” for your plain html output and just compress everything! Remember, if you run phpsuexec, you’ll need to put php directives in a local php.ini file, not .htaccess. See here for more details.

“Bandwidth stealing,” also known as “hot linking,” is linking directly to non-html objects on another server, such as images, electronic books etc. The most common practice of hot linking pertains to another site’s images.
To disallow hot linking on your server, create the following .htaccess file and upload it to the folder that contains the images you wish to protect…

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?YourSite\.com/.*$ [NC]
RewriteRule \.(gif|jpg)$ - [F]

Replace “YourSite.com” with your own. The above code causes a broken image to be displayed when it’s hot linked. If you’d like to display an alternate image in place of the hot linked one, replace the last line with…

RewriteRule \.(gif|jpg)$ http://www.YourSite.com/stop.gif [R,L]
Replace “YourSite.com” and stop.gif with your real names.

Hide and deny files..
Do you remember I mentioned that any file beginning with .ht is invisible? ..“almost every web server in the world is configured to ignore them, by default” and that is, of course, because .ht_anything files generally have server directives and passwords and stuff in them, so most servers will have something like this in their main configuration..
Standard setting.. 
<Files ~ "^\.ht">
Order allow,deny
Deny from all
Satisfy All
</Files>

which instructs the server to deny access to any file beginning with .ht, effectively protecting our .htaccess and other files. The “.” at the start prevents them being displayed in an index, and the .ht prevents them being accessed. This version..
ignore what you want 
<Files ~ "^.*\.([Ll][Oo][Gg])">
Order allow,deny
Deny from all
Satisfy All
</Files>

tells the server to deny access to *.log files. You can insert multiple file types into each rule, separating them with a pipe “|”, and you can insert multiple blocks into your .htaccess file, too. I find it convenient to put all the files starting with a dot into one, and the files with denied extensions into another, something like this..
the whole lot 
# deny all .htaccess, .DS_Store $hî†é and ._* (resource fork) files
<Files ~ "^\.([Hh][Tt]|[Dd][Ss]_[Ss]|[_])">
Order allow,deny
Deny from all
Satisfy All
</Files>

# deny access to all .log and .comment files
<Files ~ "^.*\.([Ll][Oo][Gg]|[cC][oO][mM][mM][eE][nN][tT])">
Order allow,deny
Deny from all
Satisfy All
</Files>

would cover all ._* resource fork files, .DS_Store files (which the Mac Finder creates all over the place) *.log files, *.comment files and of course, our .ht* files. You can add whatever file types you need to protect from direct access. I think it’s clear now why the file is called “.htaccess”.

Disable directory listings
Preventing directory listings can be very useful if for example, you have a directory containing important ‘.zip’ archive files or to prevent viewing of your image directories. Alternatively it can also be useful to enable directory listings if they are not available on your server, for example if you wish to display directory listings of your important ‘.zip’ files.
To prevent directory listings, create a .htaccess file following the main instructions and guidance which includes the following text:
IndexIgnore *
The above lines tell the Apache Web Server to prevent directory listings of directories and files within the directory containing the .htaccess file. The ‘*’ represents a wildcard, this means it will not display any files. It is possible to prevent listings of only certain file types, so for example you can show listings of ‘.html’ files but not your ‘.zip’ files.
To prevent listing ‘.zip’ files, create a .htaccess file following the main instructions and guidance which includes the following text:
IndexIgnore *.zip
The above line tells the Apache Web Server to list all files except those that end with ‘.zip’.
To prevent listing multiple file types, create a .htaccess file following the main instructions and guidance which includes the following text:

IndexIgnore *.zip *.jpg *.gif

The above line tells the Apache Web Server to list all files except those that end with ‘.zip’, ‘.jpg’ or ‘.gif’.
Alternatively, if your server does not allow directory listings and you would like to enable them, create a .htaccess file following the main instructions and guidance which includes the following text:
Options +Indexes

The above line tells the Apache Web Server to enable directory listing within the directory containing this .htaccess file. You can also reverse this to disable directory listings by replacing the plus sign before the text ‘Indexes’ with a minus sign. e.g. ‘Options -Indexes’.
You can also include a default description for the directory listings that is displayed at the top of the page by placing a file called ‘HEADER’ in the same directory. The contents of this file are displayed before the list of directory contents. You can also include a footer, by creating a file called ‘README’. The contents of this file are displayed after the list of directory contents.

Hot link prevention techniques
Hot link prevention refers to stopping web sites that are not your own from displaying your files or content, e.g. stopping other sites from linking directly to your images. This is most commonly used to prevent other web sites from displaying your images, but it can also be used to prevent people using your JavaScript or CSS (cascading style sheet) files. The problem with hot linking is that it uses your bandwidth, which in turn costs money; hot linking is often referred to as ‘bandwidth theft’.
Using .htaccess we can prevent other web sites from sourcing your content, and can even display different content in turn. For example, it is common to display what is referred to as an ‘angry man’ image instead of the desired images.
Note, this functionality requires that ‘mod_rewrite’ is enabled on your server. Due to the demands that can be placed on system resources, it is unlikely it is enabled so be sure to check with your system administrator or web hosting company.
To set-up hot link prevention for ‘.gif’, ‘.jpg’ and ‘.css’ files, create a .htaccess file following the main instructions and guidance which includes the following text:

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?yourdomain.com/.*$ [NC]
RewriteRule \.(gif|jpg|css)$ - [F]

The above lines tell the Apache Web Server to block all links to ‘.gif’, ‘.jpg’ and ‘.css’ files which are not from the domain name ‘http://www.yourdomain.com/‘. Before uploading your .htaccess file ensure you replace ‘yourdomain.com’ with the appropriate web site address.
To set-up hot link prevention for ‘.gif’, ‘.jpg’ files which displays alternate content (such as an angry man image), create a .htaccess file following the main instructions and guidance which includes the following text:

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?yourdomain.com/.*$ [NC]
RewriteRule \.(gif|jpg)$ http://www.yourdomain.com/hotlink.jpg [R,L]

The above lines tell the Apache Web Server to block all links to ‘.gif’ and ‘.jpg’ files which are not from the domain name ‘http://www.yourdomain.com/‘ and to display the file ‘http://www.yourdomain.com/hotlink.jpg‘ instead. Before uploading your .htaccess file ensure you replace ‘yourdomain.com’ with the appropriate web site address.

Protecting your images and (zip) files from linking
Module: mod_rewrite

Put a file named .htaccess in the directory where you have the images located.
AuthUserFile /dev/null
AuthGroupFile /dev/null

RewriteEngine On

RewriteCond %{HTTP_REFERER} !^http://www.widexl.com.* [NC]
RewriteCond %{HTTP_REFERER} !^http://ma.widexl.com.* [NC]
RewriteCond %{HTTP_REFERER} !^http://members.widexl.com.* [NC]
RewriteCond %{HTTP_REFERER} !^http://widexl.com.* [NC]
RewriteCond %{HTTP_REFERER} !^http://212.204.218.80.* [NC]

RewriteRule /* http://widexl.com/index.html [R,L]
In the RewriteCond lines, change the web address names to those who are allowed to use your images.
In the RewriteRule line, change the web address name to where you want to send the ones who are hot linking.

Note: You need to write a new line for every web address (hostname).
Remember: http://widexl.com is not the same as http://www.widexl.com

Reference:
http://www.javascriptkit.com/howto/htaccess14.shtml
http://www.freewebmasterhelp.com/tutorials/htaccess/
http://www.htaccesstutorial.net/
http://www.askapache.com/htaccess/apache-htaccess.html
http://corz.org/serv/tricks/htaccess.php
http://www.askapache.com/htaccess/apache-htaccess.html
http://www.htaccess-guide.com/index.php?a=1
http://www.askapache.com/htaccess/apache-htaccess.html
http://www.askapache.com/htaccess/apache-htaccess.html#htaccess-code-examples
http://httpd.apache.org/docs/
Apache Tutorial: .htaccess Files – Official Apache documentation and guidelines.
Apache Directives – A list of directives available in the standard Apache distribution.
Apache Documentation – Main Apache Web Server documentation.
HotScripts.com – User management resources.
The CGI Resource Index – Password protection resources.
CGI-Index.com – Security resources.
DirectoryPass – DirectoryPass is a very powerful, yet simple to use .htaccess management system.
Locked Area – Locked Area is a highly sophisticated password protection and membership management system written in Perl.
OpenCrypt – OpenCrypt is a fully automated and self-managing membership/user management system which is more than capable of the most complex multi-domain installations, whilst still being usable in the most simple of circumstances.


Basic RPM Tutorials


Introduction:

RPM is the RPM Package Manager. It is an open packaging system available for anyone to use. It allows users to take source code for new software and package it into source and binary form such that binaries can be easily installed and tracked and source can be rebuilt easily. It also maintains a database of all packages and their files that can be used for verifying packages and querying for information about files and/or packages.
Red Hat, Inc. encourages other distribution vendors to take the time to look at RPM and use it for their own distributions. RPM is quite flexible and easy to use, though it provides the base for a very extensive system.

RPM Basic usage command
In its simplest form, RPM can be used to install packages:
rpm -i foobar-1.0-1.i386.rpm
The next simplest command is to uninstall a package:

rpm -e foobar

While these are simple commands, rpm can be used in a multitude of ways. To see which options are available in your version of RPM, type:

rpm --help
You can find more details on what those options do in the RPM man page, found by typing:
man rpm

Let’s say you delete some files by accident, but you aren’t sure what you deleted. If you want to verify your entire system and see what might be missing, you would do:

rpm -Va

Let’s say you run across a file that you don’t recognize. To find out which package owns it, you would do:

rpm -qf /usr/X11R6/bin/xjewel

Now you want to see what files the koules RPM installs. You would do:

rpm -qpl koules-1.2-2.i386.rpm
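A few other query options come up constantly. As a quick sketch (the package name is only an example):

rpm -qa                    # list every installed package
rpm -ql koules             # list the files owned by an installed package
rpm -qi koules             # show the information/description of an installed package
rpm -q --requires koules   # show what an installed package depends on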

Building RPMs

The basic procedure to build an RPM is as follows:

  • Get the source code you are building the RPM for, and make sure it builds on your system.
  • Make a patch of any changes you had to make to the sources to get them to build properly.
  • Make a spec file for the package.
  • Make sure everything is in its proper place.
  • Build the package using RPM.

The Spec File

Here is a small spec file (eject-2.0.2-1.spec):

Summary: A program that ejects removable media using software control.
Name: eject
Version: 2.0.2
Release: 3
Copyright: GPL
Group: System Environment/Base
Source: http://metalab.unc.edu/pub/Linux/utils/disk-management/eject-2.0.2.tar.gz
Patch: eject-2.0.2-buildroot.patch
BuildRoot: /var/tmp/%{name}-buildroot
%description
The eject program allows the user to eject removable media
(typically CD-ROMs, floppy disks or Iomega Jaz or Zip disks)
using software control. Eject can also control some multi-
disk CD changers and even some devices' auto-eject features.
Install eject if you'd like to eject removable media using
software control.
%prep
%setup -q
%patch -p1 -b .buildroot
%build
make RPM_OPT_FLAGS="$RPM_OPT_FLAGS"

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/usr/bin
mkdir -p $RPM_BUILD_ROOT/usr/man/man1

install -s -m 755 eject $RPM_BUILD_ROOT/usr/bin/eject
install -m 644 eject.1 $RPM_BUILD_ROOT/usr/man/man1/eject.1

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root)
%doc README TODO COPYING ChangeLog

/usr/bin/eject
/usr/man/man1/eject.1

%changelog
* Sun Mar 21 1999 Cristian Gafton <gafton@redhat.com>
- auto rebuild in the new build environment (release 3)

* Wed Feb 24 1999 Preston Brown <pbrown@redhat.com>
- Injected new description and group.

[ Some changelog entries trimmed for brevity.  -Editor. ]
 

The Header
The header has some standard fields in it that you need to fill in. There are a few caveats as well. The fields must be filled in as follows:

  • Summary: This is a one line description of the package.
  • Name: This must be the name string from the rpm filename you plan to use.
  • Version: This must be the version string from the rpm filename you plan to use.
  • Release: This is the release number for a package of the same version (ie. if we make a package and find it to be slightly broken and need to make it again, the next package would be release number 2).
  • Copyright: This line tells how a package is copyrighted. You should use something like GPL, BSD, MIT, public domain, distributable, or commercial.
  • Group: This is a group that the package belongs to in a higher level package tool or the Red Hat installer.
  • Source: This line points at the HOME location of the pristine source file. It is used if you ever want to get the source again or check for newer versions. Caveat: The filename in this line MUST match the filename you have on your own system (ie. don’t download the source file and change its name). You can also specify more than one source file using lines like:
Source0: blah-0.tar.gz
Source1: blah-1.tar.gz
Source2: fooblah.tar.gz

These files would go in the SOURCES directory. (The directory structure is discussed in a later section, “The Source Directory Tree”.)
  • Patch: This is the place you can find the patch if you need to download it again. Caveat: The filename here must match the one you use when you make YOUR patch. You may also want to note that you can have multiple patch files much as you can have multiple sources. You would have something like:

Patch0: blah-0.patch
Patch1: blah-1.patch
Patch2: fooblah.patch

These files would go in the SOURCES directory.
  • Group: This line is used to tell high level installation programs (such as Red Hat's gnorpm) where to place this particular program in its hierarchical structure. You can find the latest description in /usr/doc/rpm*/GROUPS.
  • BuildRoot: This line allows you to specify a directory as the "root" for building and installing the new package. You can use this to help test your package before having it installed on your machine.
  • %description: It's not really a header item, but should be described with the rest of the header. You need one description tag per package and/or subpackage. This is a multi-line field that should be used to give a comprehensive description of the package.

Prep

This is the second section in the spec file. It is used to get the sources ready to build. Here you need to do anything necessary to get the sources patched and set up the way they need to be to do a make.
One thing to note: Each of these sections is really just a place to execute shell scripts. You could simply make an sh script and put it after the %prep tag to unpack and patch your sources. We have made macros to aid in this, however.
The first of these macros is the %setup macro. In its simplest form (no command line options), it simply unpacks the sources and cd‘s into the source directory. It also takes the following options:

  • -n name will set the name of the build directory to the listed name. The default is $NAME-$VERSION. Other possibilities include $NAME, ${NAME}${VERSION}, or whatever the main tar file uses. (Please note that these "$" variables are not real variables available within the spec file. They are really just used here in place of a sample name. You need to use the real name and version in your package, not a variable.)
  • -c will create and cd to the named directory before doing the untar.
  • -b # will untar Source# before cd‘ing into the directory (and this makes no sense with -c so don’t do it). This is only useful with multiple source files.
  • -a # will untar Source# after cd’ing into the directory.
  • -T This option overrides the default action of untarring the Source and requires a -b 0 or -a 0 to get the main source file untarred. You need this when there are secondary sources.
  • -D Do not delete the directory before unpacking. This is only useful where you have more than one setup macro. It should only be used in setup macros after the first one (but never in the first one).

The next of the available macros is the %patch macro. This macro helps automate the process of applying patches to the sources. It takes several options, listed below:

  • # will apply Patch# as the patch file.
  • -p # specifies the number of directories to strip for the patch(1) command.
  • -P The default action is to apply Patch (or Patch0). This flag inhibits the default action and requires an explicit 0 to get the main patch applied. This option is useful in a second (or later) %patch macro that requires a different number than the first macro.
  • You can also do %patch# instead of doing the real command: %patch # -P
  • -b extension will save originals as filename.extension before patching.

That should be all the macros you need. After you have those right, you can also do any other setup you need to do via sh type scripting. Anything you include up until the %build macro (discussed in the next section) is executed via sh. Look at the example above for the types of things you might want to do here.

Build

There aren’t really any macros for this section. You should just put any commands here that you would need to use to build the software once you had untarred the source, patched it, and cd’ed into the directory. This is just another set of commands passed to sh, so any legal sh commands can go here (including comments).
The variable RPM_OPT_FLAGS is set using values in /usr/lib/rpm/rpmrc. Look there to make sure you are using values appropriate for your system (in most cases you are). Or simply don’t use this variable in your spec file. It is optional.

Install

There aren’t really any macros here, either. You basically just want to put whatever commands here that are necessary to install. If you have make install available to you in the package you are building, put that here. If not, you can either patch the makefile for a make install and just do a make install here, or you can hand install them here with sh commands. You can consider your current directory to be the toplevel of the source directory.
The variable RPM_BUILD_ROOT is available to tell you the path set as the Buildroot: in the header. Using build roots is optional but highly recommended, because they keep you from cluttering your system with software that isn't in your RPM database (building an RPM doesn't touch your database…you must go install the binary RPM you just built to do that).

Optional pre and post Install/Uninstall Scripts

You can add scripts that are run before and after the installation and uninstallation of binary packages. A main reason for this is to do things like run ldconfig after installing or removing packages that contain shared libraries. The macros for each of the scripts are as follows:

  • %pre is the macro to do pre-install scripts.
  • %post is the macro to do post-install scripts.
  • %preun is the macro to do pre-uninstall scripts.
  • %postun is the macro to do post-uninstall scripts.

The contents of these sections should just be any sh style script, though you do not need the #!/bin/sh.
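For example, a package that ships shared libraries might carry something like this in its spec file (illustrative only, not part of the eject example above):

%post
# refresh the shared library cache after the new libraries are installed
/sbin/ldconfig

%postun
# refresh it again once the libraries have been removed
/sbin/ldconfig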

Files

This is the section where you must list the files for the binary package. RPM has no way to know what binaries get installed as a result of make install. There is NO way to do this. Some have suggested doing a find before and after the package install. With a multiuser system, this is unacceptable as other files may be created during a package building process that have nothing to do with the package itself.
There are some macros available to do some special things as well. They are listed and described here:

  • %doc is used to mark documentation in the source package that you want installed in a binary install. The documents will be installed in /usr/doc/$NAME-$VERSION-$RELEASE. You can list multiple documents on the command line with this macro, or you can list them all separately using a macro for each of them.
  • %config is used to mark configuration files in a package. This includes files like sendmail.cf, passwd, etc. If you later uninstall a package containing config files, any unchanged files will be removed and any changed files will get moved to their old name with a .rpmsave appended to the filename. You can list multiple files with this macro as well.
  • %dir marks a single directory in a file list to be included as being owned by a package. By default, if you list a directory name WITHOUT a %dir macro, EVERYTHING in that directory is included in the file list and later installed as part of that package.
  • %defattr allows you to set default attributes for files listed after the %defattr declaration. The attributes are listed in the form (mode, owner, group), where the mode is the octal number representing the bit pattern for the new permissions (like chmod would use), owner is the username of the owner, and group is the group you would like assigned. You may leave any field set to the installed default by simply placing a - in its place, as was done in the mode field for the example package.
  • %files -f <filename> will allow you to list your files in some arbitrary file within the build directory of the sources. This is nice in cases where you have a package that can build its own filelist. You then just include that filelist here and you don't have to specifically list the files.

The biggest caveat in the file list is listing directories. If you list /usr/bin by accident, your binary package will contain every file in /usr/bin on your system.
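Putting those macros together, a %files section for a hypothetical package named foobar might look like this:

%files
%defattr(-,root,root)
%doc README COPYING ChangeLog
%config /etc/foobar.conf
%dir /usr/lib/foobar
/usr/bin/foobar
/usr/man/man1/foobar.1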

Building It

The Source Directory Tree

The first thing you need is a properly configured build tree. This is configurable using the /etc/rpmrc file. Most people will just use /usr/src.
You may need to create the following directories to make a build tree (a minimal sketch of the commands follows this list):

  • BUILD is the directory where all building occurs by RPM. You don't have to do your test building anywhere in particular, but this is where RPM will do its building.
  • SOURCES is the directory where you should put your original source tar files and your patches. This is where RPM will look by default.
  • SPECS is the directory where all spec files should go.
  • RPMS is where RPM will put all binary RPMs when built.
  • SRPMS is where all source RPMs will be put.
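As a minimal sketch, on an older Red Hat style system the tree could be created by hand; the top-level directory below is just a common default and should be adjusted to whatever your /etc/rpmrc points at. The file names reuse the eject example above:

# create the build tree (adjust /usr/src/redhat to match your rpmrc)
mkdir -p /usr/src/redhat/BUILD /usr/src/redhat/SOURCES /usr/src/redhat/SPECS /usr/src/redhat/RPMS /usr/src/redhat/SRPMS

# drop the pristine tarball and patch into SOURCES and the spec file into SPECS
cp eject-2.0.2.tar.gz eject-2.0.2-buildroot.patch /usr/src/redhat/SOURCES/
cp eject-2.0.2-1.spec /usr/src/redhat/SPECS/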

Building the Package with RPM

Once you have a spec file, you are ready to try and build your package. The most useful way to do it is with a command like the following:

rpm -ba foobar-1.0.spec

There are other options useful with the -b switch as well:

  • p means just run the prep section of the spec file.
  • l is a list check that does some checks on %files.
  • c means do a prep and compile. This is useful when you are unsure of whether your source will build at all. It may seem useless because you might want to just keep playing with the source itself until it builds and then start using RPM, but once you become accustomed to using RPM you will find instances when you will use it.
  • i means do a prep, compile, and install.
  • b means prep, compile, install, and build a binary package only.
  • a means build it all (both source and binary packages).

There are several modifiers to the -b switch. They are as follows:

  • --short-circuit will skip straight to a specified stage (it can only be used with c and i); see the sketch after this list.
  • --clean removes the build tree when done.
  • --keep-temps will keep all the temp files and scripts that were made in /tmp. You can actually see what files were created in /tmp using the -v option.
  • --test does not execute any real stages, but implies --keep-temps.
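For example, using the eject spec above, you might first check that the sources unpack and compile before committing to a full build (this follows the old rpm -b syntax used here; on newer systems the same switches belong to rpmbuild):

rpm -bp eject-2.0.2-1.spec                  # prep only: unpack and patch the sources
rpm -bc --short-circuit eject-2.0.2-1.spec  # jump straight to the compile stage
rpm -ba --clean eject-2.0.2-1.spec          # full build of both packages, then clean up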

Reference:
http://www.ibiblio.org/pub/linux/docs/HOWTO/other-formats/html_single/RPM-HOWTO.html


Clover and Maven working with Distributed Applications


1. Configure the Maven Clover plugin.

2. Build all components with Clover enabled.

3. Deploy the Clover-enabled build to the test server.

4. Run the tests.

5. Create and review the code coverage report.

Configure Maven Clover Plugin

Configure the Maven Clover plugin in pom.xml. If you have a multi-module project, you can configure the plugin in the parent POM instead of modifying each module's pom.xml.


Build all components with clover enabled.

Run the following command.

 

  "mvn -U clover2:setup package clover2:aggregate"

 

If you see output similar to the following:

[INFO] Loaded from: C:\Documents and Settings\Administrator\.m2\repository\com\cenqua\clover\clover\2.6.3\clover-2.6.3.jar

[INFO] Clover: Commercial License registered to ABC Corporation.

[INFO] Creating new database at ‘C:\p4_depot\trunk\4A\target\clover\clover.db’.

[INFO] Processing files at 1.5 source level.

[INFO] Clover all over. Instrumented 5 files (1 package).

[INFO] Elapsed time = 0.532 secs. (9.398 files/sec, 812.03 srclines/sec)

Congratulations, Clover is now working with your source!

 

Deploy the clover enabled build to test server.

Deploy the Clover-enabled build to the test server, using the same process as a normal deployment.

Copy the Clover registry file to the appropriate directory on each of the test servers.

 

The registry file is the database file created during compilation, at the location defined by the initstring parameter of the clover-setup task. This copy needs to happen after the Clover build is complete and before you run your tests.
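For example, if the test server is a Unix machine reachable over SSH, the copy might look something like this (hostname and paths are placeholders; the destination must be the same initstring path the build used):

# copy the Clover registry created at build time to the test server
scp target/clover/clover.db user@testserver:/opt/app/clover/clover.db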

 

Background: the Clover initstring

 

FileName: xxx.db

At build time, Clover constructs a registry of your source code and writes it to a file at the location specified in the Clover initstring. When Clover-instrumented code is executed (e.g. by running a suite of unit tests), Clover looks in the same location for this registry file to initialise itself. Clover then records coverage data and writes coverage recording files next to the registry file during execution.

Note: give the folder that contains the registry file full control permissions.

 

Recommended Permissions

Clover requires access to the Java system properties for runtime configuration, as well as read/write access to areas of the file system to read the Clover coverage database and to write coverage information. Clover also uses a shutdown hook to ensure that it flushes any as yet unflushed coverage information to disk when Java exits. To support these requirements, the following security permissions are recommended:

 

grant codeBase "file:/path/to/clover.jar" {
permission java.util.PropertyPermission "*", "read";
permission java.io.FilePermission "<<ALL FILES>>", "read, write";
permission java.lang.RuntimePermission "shutdownHooks";
}

 

Grant Permissions to clover.jar

Edit the java.policy file of the java runtime on the test server

%JAVA_HOME%/jre/lib/security

 

Copy clover.jar and the license file into the Java runtime extension directory (which is on the runtime class path) of the test servers:

%JAVA_HOME%/jre/lib/ext

 

 

Run the test suite

Run the test suite as normal, whether automated test cases or manual test cases.

 

Create Code Coverage Report

Copy the coverage recording files to the build machine.

 

Once test execution is complete, you will need to copy the coverage recording files from each remote machine to the initstring path on the build machine in order to generate coverage reports.
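For example, again assuming a Unix test server and placeholder paths, the recordings can be pulled back next to the registry file like this:

# coverage recording files are named clover.dbHHHHHHH_TTTTTTTTTT (see below)
scp 'user@testserver:/opt/app/clover/clover.db*_*' target/clover/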

 

Background: CoverageRecording Files

 

Filename:xxx.dbHHHHHHH_TTTTTTTTTT or clover.dbHHHHHHH_TTTTTTTTTT.1 (where HHHHHHH and TTTTTTTTTT are both hex strings)

CoverageRecording files contain actual coverage data. When running instrumented code, Clover creates one or more Coverage Recorders. Each Coverage Recorder will write one CoverageRecording file. The number of Coverage Recorders created at runtime depends on the nature of the application you are Clovering. In general, a new Coverage Recorder will be created for each new ClassLoader instance that loads a Clovered class file. The first hex number in the filename (HHHHHHH) is a unique number based on the recording context. The second hex number (TTTTTTTTTT) is the timestamp (ms since epoch) of the creation of the Clover Recorder. CoverageRecording files are named this way to try to minimise the chance of a name clash. While it is theoretically possible that a name clash could occur, in practice the chances are very small.

CoverageRecording files are written during the execution of Clover‐instrumented code. CoverageRecording files are read during report generation or coverage browsing.

 

Run the report generation goal to create the report:

                                “mvn clover2:clover”

               


HOWTO: Install e17 from SVN/source on Ubuntu


E17 is a lightweight window manager/bundle of libraries for Unix-based operating systems. E17 is designed to be both elegant and fast – two goals it succeeds at very well. The only problem is that installing E17 on Ubuntu (and its derivatives) is not a very straightforward process if you have never done it before. The following are the steps I have taken to get the E17 environment up and running on Ubuntu 9.10 (however, it should work for all Ubuntu-based systems).

Step 1: Install the build dependencies. To do this, simply paste the following chunk of code into your favorite terminal and let it work its magic:

sudo apt-get install xterm make gcc bison flex subversion cvs automake1.10 autoconf autotools-dev autoconf-archive libtool gettext libpam0g-dev libfreetype6-dev libpng12-dev zlib1g-dev libjpeg62-dev libtiff4-dev libungif4-dev librsvg2-dev libx11-dev libxcursor-dev libxrender-dev libxrandr-dev libxfixes-dev libxdamage-dev libxcomposite-dev libxss-dev libxp-dev libxext-dev libxinerama-dev libxft-dev libxfont-dev libxi-dev libxv-dev libxkbfile-dev libxkbui-dev libxres-dev libxtst-dev libltdl7-dev libglu1-xorg-dev libglut3-dev xserver-xephyr libdbus-1-dev liblua5.1-0-dev

Step 2: Now that we have all the dependencies installed, we are going to use the easy_e17.sh script to download, compile, and install E17 from SVN. Download the script, then, assuming you saved it to the default Downloads folder, run the following in a terminal to get the install going:

cd ~/Downloads && chmod +x easy_e17.sh && sudo ./easy_e17.sh -i

Go get a cup of coffee or something, the length of time the above command takes to complete depends on your Internet connection and computer speed.

Step 3: Assuming the command you ran in step 2 finishes without issues/errors, check the output in the terminal; it should mention some environment variables that need to be set. Copy each of the export lines it lists and run them in the terminal.
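They typically look something like the following; use the exact lines the script prints, as these are only an illustration assuming the default /opt/e17 install prefix:

export PATH="/opt/e17/bin:$PATH"
export LD_LIBRARY_PATH="/opt/e17/lib:$LD_LIBRARY_PATH"
export PKG_CONFIG_PATH="/opt/e17/lib/pkgconfig:$PKG_CONFIG_PATH"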

Step 4: We need to copy the enlightenment .desktop file to the proper shared location so that it appears as a login option in GDM/KDM. To do so, run the following in a terminal:

sudo cp /opt/e17/share/xsessions/enlightenment.desktop /usr/share/xsessions/enlightenment.desktop

Log out of your current desktop and select “Enlightenment” from the log in options you are presented with in your login manager.

Enjoy your new E17 powered desktop! Also please remember E17 is considered beta software – so it is not encouraged to use it on production machines. Lastly I would like to also say that while the default configuration of e17 appears crude at first, this is intentional. E17 is extremely customizable. Play with settings, move things around, add and remove objects and you will see creating a beautiful and customized desktop is just a few clicks away!

Source:

http://jeffhoogland.blogspot.com/2010/05/howto-install-e17-from-svnsource-on.html


SSARC Utility & SSRESTOR Utility – Archive, Restore VSS Project – Guide


Question: 

How to Archive VSS Project in Visual Source Safe (VSS)?

How to Restore VSS Project in  Visual Source Safe (VSS)?

What is SSARC Utility?

What is SSRESTOR Utility?

SSARC Utility

Allows you to archive files, projects, or old versions from a Visual SourceSafe database. Each time you run SSARC (Ssarc.exe), the utility asks only once before it deletes the files/projects. Visual SourceSafe also implements archive operations through its Archive wizard, available in Visual SourceSafe Administrator.

The utility SSRESTOR is used for restoration of archived files. Use of SSARC and SSRESTOR together allows wide-area Visual SourceSafe installations to move files and projects among databases.

Limitation: SSARC cannot create an archive file that is greater than 2 GB. If you try to archive a project larger than this, you will receive an Out of Memory and/or a CRC mismatch error message. To work around this limitation, you will need to archive each subproject.
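For instance, rather than archiving one very large project in a single pass, you could archive each subproject to its own file (the project and file names below are only placeholders):

SSARC -d- -yadmin,password bigproject-ui.ssa "$/BigProject/UI"
SSARC -d- -yadmin,password bigproject-server.ssa "$/BigProject/Server"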

 

Syntax

ssarc [-C][-D][-I-][-O][-S][-V][-X][-Y] <archive file to create> <files/projects to archive>

If you are not familiar with the syntax conventions used, < and > are used to delimit arguments. The < and > characters are not typed in. The | means a choice of options. Items enclosed in square brackets, [] are optional, and items not enclosed in square brackets are mandatory.

 

Utility Options

The following table describes options available with SSARC.

-C

Specifies a comment (standard Visual SourceSafe parameter). The comment is inserted into the Visual SourceSafe history if the items are deleted from the database as part of the archived record. The comment is also inserted into the archive file itself, if there is one.

-C-

Specifies no comment.

-D

Deletes the archived items from the database.

Specifying the -D option will delete the items that you are archiving from your project. This will delete your project from SourceSafe if you use it! Of course, you can restore from your backup file, but for normal use, and for smaller projects, I don't recommend this. This function obviously has its uses, but be sure you want to do this.

-D-

Controls whether Visual SourceSafe actually deletes anything from the database. If this option is not used, the default is to ask.

If the -D- option is used, SSARC will not delete the projects or items you are archiving. The default is to prompt the user interactively, so if you want to run this from a script, you will need to choose one of these options or use the -I- flag.

-I-

Specifies no prompt for input (standard Visual SourceSafe parameter).

-O

Paginates or redirects output (standard Visual SourceSafe parameter).

-S<srcsafe.ini path>, <data path>

Specifies a path to Srcsafe.ini and the Data directory. The full syntax looks like -Sc:\VSS,OldDB. The item before the comma is the full path to Srcsafe.ini, and the item after the comma is the string in parentheses in a Data_Path setting. If there is no comma, a Srcsafe.ini path is indicated, but no data string. If the first character after the -S is a comma, a data string is specified, but no Srcsafe.ini path.

 

-V[D|L]<version>

Specifies a version number to archive (standard Visual SourceSafe parameter). Enter the version number in standard Visual SourceSafe format (number, date, or label). If this option is not used, SSARC operates on entire files or projects, instead of all versions up to and including a certain version.

Note that SSARC is generally inclusive. That is, if you type -V9, you create an archive file that contains version 9.0 and everything before it. The delete pass is also inclusive; that is, version 9.0 is actually deleted unless it is in the label format. In the latter case, the label is stored in the archive file but is not deleted. If -V is specified, you never delete the current version, even if you specify -D9/9/99. If you are using a version in label format, and the label has a space in it, you must place the entire option in quotation marks, for example, "-VThisOne".

-X

Archives only deleted items in the specified files and projects. Deleted items are still stored in the Visual SourceSafe database unless the Destroy Permanently option is selected when performing a Delete command.

-X-

Archives all items in the specified files and projects.

-Y<user>,<pwd>

Specifies the user name and password (standard Visual SourceSafe parameter). An example is -YAdmin,Bunny.

archive file to create

Specifies the name of the archive file to create during the archive operation.

files/projects to archive

Specifies files and projects to back up.

Examples

SSARC -d- -yadmin,password archive.ssa $/
Backup the entire default Sourcesafe database to archive.ssa, leaving the database exactly as it is.
SSARC -d- "-vlProduction Release" -yadmin,password -olog.txt archive.ssa $/Test
Backup everything since the version labelled ‘Production Release’ and create a log file with the results of the archiving process.
SSARC -d -x -yadmin,password archive.ssa "$/Project Global Domination" $/OtherProject
Archive, and delete the deleted files from two projects.
SSARC -i -yadmin,password -olog.txt "-cArchive Everything" archive.ssa $/
Archive the entire Sourcesafe database, while creating a log file, adding a comment, and running non-interactively, suitable for a scheduled task.

SSRESTOR Utility

Restores information from a previously created archive. If the restore operation attempts to create a duplicate file or project name, the operation fails. Visual SourceSafe also implements restore operations through its Restore wizard, available in Visual SourceSafe Administrator.

Limitation: SSRESTOR cannot restore a project that is larger than 2 GB. A project larger than this cannot be archived.

 

Syntax:

ssrestor [-C][-I-][-L][-O][-P<project>][-S][-T][-X][-Y]

<archive file to restore> [files/projects]

 

Utility Options

The following table describes options available with this command.

-C

Specifies a comment (standard Visual SourceSafe parameter). The comment is applied to the history entry for restored item(s).

-C-

Specifies no comment.

-I-

Specifies no input (standard Visual SourceSafe parameter).

-L

Specifies a list only, without any restoration.

-LA

Specifies a list of all files and subprojects listed under a project, for example, project $/A.

-O

Redirects output (standard Visual SourceSafe parameter).

-P<project>

Specifies a project into which to restore the archived content. For instance, if you archive $/A/BAR.C and restore it without this option, it is restored as $/A/BAR.C; with -P you can restore it under a different project (see the second example below).

-S<srcsafe.ini path><data path>

Specifies a path to Srcsafe.ini and the Data directory.

-T

Tests the archive file for corruption, but does not actually restore from the archive.

-X, -X-

Identifies an item that you want to restore, for example, $/a/b. This option distinguishes between deleted projects and undeleted projects that have the same names. For example, if you have deleted $a/b and specify the -X option, SSRESTOR restores the deleted $/a/b. If you specify -X-, SSRESTOR restores an undeleted $/a/b. Even though they have the same name, note that these are two different projects that the utility treats differently.

-Y<user>,<pwd>

Specifies user name and password (standard Visual SourceSafe parameter). An example is -YAdmin,Moggy.

archive file to restore

Specifies the name of the archive file from which to restore the database.

files/projects

Specifies files and projects to restore.

Examples

SSRESTOR -la -yadmin,password archive.ssa $/
Display a list of all of the files archived in archive.ssa.
SSRESTOR "-p$/Test 2" -sD:\newfolder\ -yadmin,password backup.ssa $/Test
Restores project $/Test to a new location, $/Test 2, from the archive file backup.ssa, into a different SourceSafe database.

Miscellaneous tips

  • Putting the path to the <VSS PATH>\win32 folder in your PATH environment variable makes shell commands a lot easier
  • If you get a message “Only ADMIN can run this utility”, it means that you are either not specifying the admin account, or the password is incorrect. Use "-yadmin,password" if the admin password is “password”, or "-yadmin," if the admin password is not set (not recommended!).
  • If you are using an option that has a space in the argument, enclose the whole thing with quote marks, i.e. “-p$/Project A”
  • A daily archive is easy to do with a Windows shell script, maybe something like this. Then use Scheduled Tasks to have this run every day.
    function q (str)
    	' to make the command a bit more readable, I hope
    	q = """" & str & """"
    end function

    PROJECT = "$/"
    USERID = "admin"
    PASSWORD = "admin"

    ' location of SSARC program
    SSARCPATH = "C:\program files\microsoft visual studio\common\vss\win32\ssarc.exe"
    ' folder of srcsafe.ini
    SRCSAFEINIPATH = ""
    ' prepended to filename in case you're doing more than one.
    LABEL = "ARCHIVE"
    ' destination of archive files
    BACKUPFOLDER = "C:\BACKUPS\"

    ' generate a name based on the time.
    today = now()
    backupfilename = LABEL & "-" & formatdatetime(now,2) & ".ssa"

    cmd = q(SSARCPATH) & " " & _
    q("-s" & SRCSAFEINIPATH & ",") & " -i- " & " -d- " & _
    q("-y" & USERID & "," & PASSWORD) & " " & _
    q(BACKUPFOLDER & backupfilename) & " " & q(PROJECT)

    Set WshShell = WScript.CreateObject("WScript.Shell")
    wscript.echo cmd
    WshShell.run cmd
    set wsshell = nothing
