Top 10 Sentiment Analysis Tools

What Is A Sentiment Analysis Tool?

A sentiment analysis tool is AI software that automatically analyzes text data to help you quickly understand how customers feel about your brand, product, or service. Sentiment analysis tools work by automatically detecting the emotion, tone, and urgency in online conversations, assigning them a positive, negative, or neutral tag, so you know which customer queries to prioritize. There are many sentiment analysis tools available, but not all are equal. Some are a lot easier to use than others, while some require an in-depth knowledge of data science.
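To make the tagging idea concrete, here is a minimal, self-contained sketch. The word lists are invented for illustration; real tools use trained models rather than keyword lists:

```python
# Minimal illustration of what a sentiment analysis tool does:
# score text against small positive/negative word lists and assign
# a positive/negative/neutral tag. Word lists are made up.

POSITIVE = {"great", "love", "excellent", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "terrible", "hate", "bug"}

def tag_sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        label = "positive"
    elif score < 0:
        label = "negative"
    else:
        label = "neutral"
    return label, score

print(tag_sentiment("love the new dashboard, support was helpful"))  # ('positive', 2)
print(tag_sentiment("checkout is slow and the app is broken"))       # ('negative', -2)
```

A real tool adds negation handling, context, and model-based scoring on top of this basic idea.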

Here’s an updated list of the top 10 sentiment analysis tools:

  1. IBM Watson Natural Language Understanding
  2. Google Cloud Natural Language API
  3. Microsoft Azure Text Analytics
  4. Amazon Comprehend
  5. Aylien Text Analysis
  6. MonkeyLearn
  7. Hugging Face Transformers
  8. RapidMiner
  9. Tweepy
  10. Lexalytics

1. IBM Watson Natural Language Understanding:

IBM Watson offers a powerful sentiment analysis API that provides accurate sentiment analysis along with other NLP capabilities.

Features:

  • Sentiment Analysis: Watson NLU can analyze text to determine the overall sentiment expressed, whether it is positive, negative, or neutral. It provides a sentiment score along with the sentiment label.
  • Entity Recognition: The tool can identify and classify entities mentioned in the text, such as people, organizations, locations, dates, and more. It helps in extracting important information and understanding the context.
  • Emotion Analysis: Watson NLU can detect emotions expressed in text, including joy, sadness, anger, fear, and disgust. It provides emotion scores for each category, allowing you to gauge the emotional tone of the text.

2. Google Cloud Natural Language API:

Google Cloud’s Natural Language API provides sentiment analysis, entity recognition, and other language processing features.

Features:

  • Sentiment Analysis: The API can analyze the sentiment of a given text, providing a sentiment score and magnitude. The score indicates the overall sentiment (positive or negative), while the magnitude represents the strength or intensity of the sentiment.
  • Entity Recognition: Google Cloud Natural Language API can identify and classify entities mentioned in the text, such as people, organizations, locations, dates, and more. It provides information about the type of entity and supports entity linking to additional information.
  • Entity Sentiment Analysis: In addition to entity recognition, the API can also provide sentiment analysis at the entity level. It assigns sentiment scores to individual entities mentioned in the text, indicating the sentiment associated with each entity.
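A rough, self-contained sketch of how score and magnitude can diverge. This illustrates the concept only, not the API's actual computation:

```python
# Rough illustration of score vs. magnitude (not the actual Google API):
# score reflects overall polarity, magnitude reflects total emotional
# intensity, so mixed text can score near 0 yet have a high magnitude.

def document_sentiment(sentence_scores):
    score = sum(sentence_scores) / len(sentence_scores)   # overall polarity
    magnitude = sum(abs(s) for s in sentence_scores)      # total intensity
    return round(score, 2), round(magnitude, 2)

# A mixed review: strong feelings in both directions cancel in the
# score but accumulate in the magnitude.
print(document_sentiment([0.8, -0.7, 0.9, -0.8]))  # (0.05, 3.2)
```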

3. Microsoft Azure Text Analytics:

Microsoft Azure Text Analytics is a cloud-based service provided by Microsoft that offers a variety of text analysis capabilities. It is part of the larger Azure Cognitive Services suite, specifically focused on processing and understanding natural language text.

Features:

  • Sentiment analysis
  • Key phrase extraction
  • Language detection
  • Used to analyze unstructured text for a range of tasks
  • Built with best-in-class Microsoft machine-learning algorithms
  • Training data is not required to use this API

4. Amazon Comprehend:

Amazon Comprehend is a natural language processing (NLP) service provided by Amazon Web Services (AWS). It offers a range of powerful features for extracting insights and performing analysis on text data.

Features:

  • Sentiment Analysis: Amazon Comprehend can analyze text and determine the sentiment expressed, whether it is positive, negative, neutral, or mixed. It returns a confidence score between 0 and 1 for each sentiment class, indicating how certain the service is of that sentiment.
  • Entity Recognition: The service can identify and categorize entities mentioned in the text, such as people, organizations, locations, dates, and more. It offers pre-trained entity types and also allows customization for domain-specific entity recognition.
  • Key Phrase Extraction: Amazon Comprehend can extract key phrases or important terms from the text. This helps in understanding the main topics or subjects discussed within the text data.
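The DetectSentiment response pairs an overall label with per-class confidence scores. A sketch of reading such a response; the key names follow the documented response shape, while the values are invented:

```python
# Sketch of interpreting a Comprehend-style DetectSentiment response
# (shape based on the documented response; values here are invented).
response = {
    "Sentiment": "POSITIVE",
    "SentimentScore": {
        "Positive": 0.93, "Negative": 0.02, "Neutral": 0.04, "Mixed": 0.01,
    },
}

scores = response["SentimentScore"]
label = max(scores, key=scores.get)   # highest-confidence class
print(label, scores[label])           # Positive 0.93
```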

5. Aylien Text Analysis:

Aylien Text Analysis API is a package of Natural Language Processing and Machine Learning-powered APIs for analyzing and extracting various kinds of information from textual content. The API supports multiple (human) languages, which can be selected using the language parameter available on most endpoints.

Features:

  • Sentiment Analysis: Aylien Text Analysis can perform sentiment analysis on text, providing a sentiment score that indicates the overall sentiment expressed in the text, whether it is positive, negative, or neutral.
  • Entity Extraction: The tool can identify and extract entities mentioned in the text, such as people, organizations, locations, dates, and more. It provides structured information about the entities present in the text.
  • Concept Extraction: Aylien Text Analysis can identify and extract key concepts or topics discussed in the text. It helps in understanding the main ideas and themes present in the content.

6. MonkeyLearn:

MonkeyLearn is a no-code text analytics platform that offers pre-built and custom machine-learning models for sentiment analysis, entity recognition, topic classification, and more. It simplifies text analytics and visualization of customer feedback with its easy-to-use interface and powerful AI capabilities.

Features:

  • Provides an all-in-one text analysis and data visualization studio that enables users to gain instant insights when analyzing their data
  • Users can use MonkeyLearn’s ready-made machine-learning models or build and train their own code-free
  • Offers a range of pre-trained classifiers and extractors, including sentiment analysis and entity recognition
  • Users can easily import their dataset, define custom tags, and train their models in a simple UI
  • Offers business templates tailored for different scenarios, equipped with pre-made text analysis models and dashboards
  • Users can upload data, run the analysis, and get actionable insights instantly visualized
  • MonkeyLearn’s NPS Analysis template helps strengthen promoters, convert passives and detractors, and improve overall customer satisfaction

7. Hugging Face Transformers:

Hugging Face Transformers is an open-source library that provides pre-trained models for various NLP tasks, including sentiment analysis.

Features:

  • Pre-trained Models: Hugging Face Transformers offers a vast collection of pre-trained models for various NLP tasks, including text classification, sentiment analysis, named entity recognition, question answering, language translation, summarization, and more. These models are trained on large datasets and can be fine-tuned for specific tasks.
  • State-of-the-Art Models: Hugging Face Transformers includes state-of-the-art models like BERT, GPT, RoBERTa, and T5, which have achieved high performance on various NLP benchmarks and competitions.
  • Model Architecture Flexibility: The library provides an easy-to-use interface for loading and using pre-trained models, allowing you to apply them to your specific NLP tasks. It supports both PyTorch and TensorFlow backends, providing flexibility in choosing your preferred framework.
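In the library itself, sentiment analysis is typically a one-liner via `pipeline("sentiment-analysis")`, which runs text through a fine-tuned model and applies a softmax to the output logits. That final step can be sketched in plain Python; the logit values below are invented:

```python
import math

# A fine-tuned sentiment model outputs one logit per label; softmax
# turns them into probabilities, and argmax picks the label.
def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["NEGATIVE", "POSITIVE"]
logits = [-1.2, 3.4]                      # pretend model output
probs = softmax(logits)
label = labels[probs.index(max(probs))]
print(label, round(max(probs), 2))        # POSITIVE 0.99
```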

8. RapidMiner:

RapidMiner is an interesting option on this list. It doesn’t consider itself a “sentiment analysis tool” per se, but a data science platform that performs text mining on unstructured data to uncover sentiment. A few examples of the “unstructured data” in question: online reviews, social media posts, call center transcriptions, claims forms, research journals, patent filings, and more.

Features:

  • Analyzes sources like social media, research journals, call center transcriptions, online reviews, forums, and patent filings for sentiment analysis.
  • Performs extraction, modeling, data cleansing, and deployment in the same environment.
  • Offers pre-built algorithms, model training, and data visualization.

9. Tweepy:

Tweepy is a Python library that simplifies the process of interacting with the Twitter API. It provides an easy-to-use interface for accessing Twitter’s platform and performing various tasks.

Features:

  • API Authorization: Tweepy handles the authentication process required to access the Twitter API. It supports various authentication methods, including OAuth 1.0a and OAuth 2.0.
  • Access to Twitter Data: Tweepy enables you to retrieve various types of Twitter data, such as tweets, user profiles, followers, and trends. It provides convenient methods to fetch this data using the Twitter API endpoints.
  • Streaming API: Tweepy supports the Streaming API provided by Twitter, allowing you to receive real-time data from Twitter in a continuous stream. This is useful for tracking specific keywords, hashtags, or users in real-time.

10. Lexalytics:

Lexalytics is another platform that will help you turn your text into profitable decisions. With their state-of-the-art natural language processing and machine learning technologies, they can transform any given text into actionable insights. Lexalytics helps explain why a customer is responding to your brand in a specific way, rather than how, using NLP to determine the intent of the sentiment expressed by the consumer online.

Features:

  • Uses NLP (Natural Language Processing) to analyze text and give it an emotional score.
  • Offers integration with valuable tools like Zapier, Angoss, Import.io, Voziq, Leanstack, etc.
  • Comes with a Semantria Cloud-based API that offers multiple industry packs with customizable language preferences.
  • Analyzes all kinds of documents on its Cloud API.
  • Offers support for 30 languages.

Top 20 Machine Learning Frameworks

What is Machine Learning?

Machine Learning, ML for short, is an area of computational science that deals with the analysis and interpretation of patterns and structures in large volumes of data. Through it, we can infer insightful patterns from data sets to support business decision-making, with little or no need for human intervention.

In Machine Learning, we feed large volumes of data to a computer algorithm that then trains on it, analyzing it to find patterns and generating data-driven decisions and recommendations. When errors or outliers are identified, the algorithm takes this new information as input to improve its future recommendations and decision-making.
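The training loop described above can be sketched in a few lines: a toy gradient-descent example that learns the pattern y = 2x from data. Everything here is invented for illustration:

```python
# Minimal illustration of "training on data": gradient descent fits a
# weight w so that w * x approximates y, shrinking the error each pass.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]   # underlying pattern: y = 2x

w, lr = 0.0, 0.01
for _ in range(200):                       # repeated passes over the data
    for x, y in data:
        error = w * x - y                  # how wrong the current model is
        w -= lr * 2 * error * x            # nudge w to reduce the error

print(round(w, 3))  # close to 2.0
```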

Simply put, ML is a field in AI that supports organizations to analyze data, learn, and adapt on an ongoing basis to help in decision-making. It’s also worth noting that deep learning is a subset of machine learning.

What is a Machine Learning Framework?

A simplified definition would describe machine learning frameworks as tools or libraries that allow developers to easily build ML models or Machine Learning applications, without having to get into the nuts and bolts of the core algorithms. A framework provides more of an end-to-end pipeline for machine learning development.

Here are the top 20 machine learning frameworks:

  1. TensorFlow
  2. PyTorch
  3. scikit-learn
  4. Keras
  5. MXNet
  6. Caffe
  7. Theano
  8. Microsoft Cognitive Toolkit (CNTK)
  9. Spark MLlib
  10. H2O.ai
  11. LightGBM
  12. XGBoost
  13. CatBoost
  14. Fast.ai
  15. Torch
  16. CNTK (Microsoft Cognitive Toolkit)
  17. Deeplearning4j
  18. Mahout
  19. Accord.NET
  20. Shogun

1. TensorFlow:

Developed by Google’s Brain Team, TensorFlow is one of the most widely used machine learning frameworks. It provides a comprehensive ecosystem for building and deploying machine learning models, including support for deep learning. TensorFlow offers high-level APIs for ease of use and low-level APIs for customization.

Key Features:

  • Core library with first-class Python APIs; a JavaScript variant (TensorFlow.js) can be used via script tags or installed through npm
  • Open source and has extensive APIs
  • Runs on CPUs, GPUs, and TPUs
  • Extremely popular and has lots of community support

2. PyTorch:

PyTorch is a popular open-source machine learning framework developed by Facebook’s AI Research Lab. It has gained significant popularity due to its dynamic computational graph, which enables more flexibility during model development. PyTorch is widely used for research purposes and supports both deep learning and traditional machine learning models.

Key Features:

  • Supports cloud-based software development
  • Suitable for designing neural networks and Natural Language Processing
  • Used by Meta and IBM
  • Builds dynamic computational graphs, which makes debugging easier
  • Compatible with Numba and Cython

3. scikit-learn:

scikit-learn is a Python library that provides a simple and efficient set of tools for data mining and machine learning. It offers a wide range of algorithms for classification, regression, clustering, and dimensionality reduction. scikit-learn is known for its user-friendly API and extensive documentation.

Key Features:

  • Works well with Python
  • One of the go-to frameworks for data mining and data analysis
  • Open-source and free
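A small, self-contained example of the scikit-learn API described above, training a Naive Bayes text classifier on a toy dataset (the texts and labels are invented):

```python
# Train a tiny text classifier with scikit-learn and classify a new
# sentence. Toy data invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["great product, works well", "love it, very helpful",
         "terrible, waste of money", "broken on arrival, awful"]
labels = ["pos", "pos", "neg", "neg"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)        # bag-of-words features
model = MultinomialNB().fit(X, labels)

print(model.predict(vectorizer.transform(["works great, love it"])))  # ['pos']
```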

4. Keras:

Keras is a high-level neural networks API written in Python. Initially developed as a user-friendly interface for building deep learning models on top of TensorFlow, Keras has evolved into an independent framework. It provides an intuitive and modular approach to building neural networks and supports both convolutional and recurrent networks.

5. MXNet:

MXNet is a deep learning framework that emphasizes efficiency, scalability, and flexibility. It offers both imperative and symbolic programming interfaces, allowing developers to choose the approach that best suits their needs. MXNet is known for its support of distributed training, which enables training models on multiple GPUs or across multiple machines.

Key Features:

  • Adopted by Amazon for AWS
  • Supports multiple languages, including Python, JavaScript, Julia, C++, Scala, and Perl
  • Microsoft, Intel, and Baidu also support Apache MXNet
  • Also used by the University of Washington and MIT

6. Caffe:

With speed, modularity, and expressiveness in mind, the Berkeley Vision and Learning Center (BVLC) and community contributors developed Caffe, a Deep Learning framework. Its speed makes it ideal for research experiments and production edge deployment. It ships with a BSD-licensed C++ library with a Python interface, and users can switch between CPU and GPU. Google’s DeepDream was built on Caffe. However, Caffe is known to have a steep learning curve, and implementing new layers with it is also difficult.

7. Theano:

Theano was developed at the LISA lab (University of Montreal) and released under a BSD license as a Python library that rivals hand-crafted C implementations for speed. Theano is especially good with multidimensional arrays and lets users optimize mathematical computations, mostly in Deep Learning with efficient Machine Learning algorithms. Theano uses GPUs and carries out symbolic differentiation efficiently.

Several popular packages, most notably Keras, could use Theano as a backend. Unfortunately, Theano’s development has been discontinued, but it is still considered a good resource in ML.

8. Microsoft Cognitive Toolkit (CNTK):

CNTK is a deep learning framework developed by Microsoft. It provides high-level abstractions and supports both convolutional and recurrent neural networks. CNTK is known for its scalability and performance, particularly in distributed training scenarios.

9. Spark MLlib:

Spark MLlib is a machine learning library provided by Apache Spark, an open-source big data processing framework. Spark MLlib offers a wide range of tools and algorithms for building scalable and distributed machine learning models. It is designed to work seamlessly with the Spark framework, enabling efficient processing of large-scale datasets.

10. H2O.ai:

H2O.ai is an open-source machine-learning platform that provides a range of tools and frameworks for building and deploying machine-learning models. It aims to make it easy for data scientists and developers to work with large-scale data and build robust machine-learning pipelines.

11. LightGBM:

LightGBM is an open-source gradient-boosting framework developed by Microsoft. It is specifically designed to be efficient, scalable, and accurate, making it a popular choice for various machine-learning tasks.

12. XGBoost:

XGBoost (Extreme Gradient Boosting) is a powerful and widely used open-source gradient boosting framework that has gained significant popularity in the machine learning community. It is designed to be efficient, scalable, and highly accurate for a variety of machine-learning tasks.
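The core idea behind gradient boosting can be sketched in miniature: each round fits a simple one-split “stump” to the current residuals and adds it to the ensemble. This illustrates the technique only, not XGBoost’s implementation (which adds regularization, second-order gradients, and much more):

```python
# Gradient boosting in miniature: fit a one-split stump to the current
# residuals each round, then add it to the ensemble. Toy data invented.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.0, 1.2, 1.1, 3.0, 3.2, 3.1]        # a step-shaped target

def fit_stump(xs, residuals):
    """Find the split that best explains the residuals with two means."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - (lm if x <= split else rm)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    return best[1:]

def predict(ensemble, x):
    return sum(lm if x <= split else rm for split, lm, rm in ensemble)

ensemble, residuals = [], ys[:]
for _ in range(10):                         # 10 boosting rounds
    split, lm, rm = fit_stump(xs, residuals)
    ensemble.append((split, lm, rm))
    residuals = [y - predict(ensemble, x) for x, y in zip(xs, ys)]

print(round(predict(ensemble, 2.0), 2), round(predict(ensemble, 5.0), 2))
```

Each round reduces the remaining error, so the ensemble’s predictions move steadily toward the targets.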

13. CatBoost:

CatBoost is an open-source gradient-boosting framework developed by Yandex, a Russian technology company. It is specifically designed to handle categorical features in machine learning tasks, making it a powerful tool for working with structured data.

14. Fast.ai:

Fast.ai is a comprehensive deep-learning library and educational platform that aims to democratize and simplify the process of building and training neural networks. It provides a high-level API on top of popular deep learning frameworks like PyTorch, allowing users to quickly prototype and iterate on their models.

15. Torch:

Torch was a widely used open-source scientific computing framework built on the Lua language; its ideas and much of its ecosystem carried over into PyTorch, which provides a flexible and efficient platform for building and training neural networks and is developed and maintained by Facebook’s AI Research Lab (FAIR).

16. CNTK (Microsoft Cognitive Toolkit):

CNTK (Microsoft Cognitive Toolkit), covered above, is an open-source deep learning framework developed by Microsoft. It provides a flexible and scalable platform for building, training, and deploying deep learning models, although Microsoft no longer actively develops it.

17. Deeplearning4j:

Deeplearning4j (DL4J) is an open-source deep-learning library specifically designed for Java and the Java Virtual Machine (JVM) ecosystem. It provides a comprehensive set of tools and capabilities for building and training deep neural networks in Java, while also supporting integration with other JVM-based languages like Scala and Kotlin.

18. Mahout:

Apache Mahout is an open-source machine learning library and framework designed to provide scalable and distributed implementations of various machine learning algorithms. It is part of the Apache Software Foundation and is built on top of Apache Hadoop and Apache Spark, making it well-suited for big data processing.

19. Accord.NET:

Accord.NET is an open-source machine learning framework for .NET developers. It provides a wide range of libraries and algorithms for various machine-learning tasks, including classification, regression, clustering, neural networks, image processing, and more. Accord.NET aims to make machine learning accessible and easy to use within the .NET ecosystem.

20. Shogun:

Shogun is an open-source machine-learning library that provides a comprehensive set of algorithms and tools for a wide range of machine-learning tasks. It is implemented in C++ and offers interfaces for several programming languages, including Python, Java, Octave, and MATLAB.


Top 10 Static Code Analysis Tool | Best Static Code Analysis Tools List

Software security is a major concern in today’s software market, which makes code analysis an essential part of the development lifecycle. We can no longer afford to sit back and manually read every line of code to find issues and bugs; the days of relying on manual review in the software development lifecycle to find flaws in the code are over.
Mindsets have changed, and building quality, secure code from the beginning is on the rise. This is the age of automation, and developers and programmers are adopting tools that detect flaws as early as possible in the software development lifecycle.
As the process shifts toward automation, static code analysis (SCA) has become an important part of creating quality code. So, what is static code analysis?

Static Code Analysis is a technique that quickly and automatically scans code, line by line, to find security flaws and issues that might otherwise be missed before the software or application is released. It works by reviewing the code without actually executing it.
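The idea of reviewing code without executing it can be demonstrated with Python’s standard `ast` module: a toy analyzer that flags calls to `eval()`, a common security finding. The source snippet being scanned is invented:

```python
# A toy static analysis: parse Python source into an AST (without
# executing it) and flag calls to eval(), a classic security finding.
import ast

SOURCE = """
user_input = input()
result = eval(user_input)   # dangerous: executes arbitrary code
"""

findings = []
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if (isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "eval"):
        findings.append(f"line {node.lineno}: call to eval()")

print(findings)  # ['line 3: call to eval()']
```

Real tools apply hundreds of such rules, plus data-flow analysis, across entire codebases.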

There are three main benefits of static analysis tools:
1. Automation: automation saves time and energy, which you can invest in other aspects of the development lifecycle, helping you release your software faster.
2. Security: by adopting static analysis you reduce the risk of security vulnerabilities in your application, helping ensure that you deliver secure and reliable software.
3. Implementation: static analysis can be introduced as early in the software development lifecycle (SDLC) as you have code to scan, giving you more time to fix the issues the tool discovers. The best part of static analysis is that it can point to the exact line of code found to be problematic.
There are many static code analysis tools available, but choosing a good one among them is a real challenge. I have done some research and compiled a list of the top 10 static code analysis tools:

1. VisualCodeGrepper

VisualCodeGrepper is an open-source automated code security review tool that works with C++, C#, VB, PHP, Java, and PL/SQL to track down insecurities and other issues in code. It rapidly reviews code and describes the issues it discovers in detail, offering a simple-to-use interface. It allows custom configuration of queries and has been updated regularly since its creation in 2012.
2. Coverity


Coverity is a static code analysis tool that supports C, C++, C#, Objective-C, Java, JavaScript, Node.js, Ruby, PHP, and Python; its Coverity Scan service is free for open-source projects. It is an excellent static analysis product, with support for a wide range of compilers and clear, detailed descriptions of code issues. You can use it in your desktop environment to quickly find and resolve errors before checking in code.

3. Veracode


Veracode is also one of the best static code analysis tools. It can find security flaws in application binary code (compiled or “byte” code), even when the source code is not available. Veracode supports multiple languages, including .NET (C#, ASP.NET, VB.NET), Java (Java SE, Java EE, JSP), C/C++, JavaScript (including AngularJS, Node.js, and jQuery), Python, PHP, Ruby on Rails, ColdFusion, and Classic ASP, as well as mobile applications on the iOS and Android platforms, including those written in cross-platform JavaScript frameworks.

4. YASCA


“Yet Another Source Code Analyzer (YASCA)” is an open-source static code analysis tool that supports HTML, Java, JavaScript, .NET, COBOL, PHP, ColdFusion, ASP, C/C++, and other languages. It is a flexible, easily extended tool that can integrate with a variety of other tools, including CppCheck, Pixy, RATS, PHPLint, JavaScript Lint, JLint, FindBugs, and more.
5. Cppcheck


Cppcheck is an open-source static code analysis tool for C/C++. It focuses on the sorts of bugs that compilers regularly fail to detect, with the objective of reporting only genuine mistakes in the code. It provides both a command-line mode and a graphical user interface (GUI) mode, and offers integrations with environments such as Eclipse, Hudson, Jenkins, and Visual Studio.

6. Clang

 

The Clang Static Analyzer is also one of the best static code analysis tools for C, C++, and Objective-C. The analyzer can be run either as a standalone tool or within Xcode. It is open source and part of the Clang project; it is built on the Clang library, forming a reusable component that can be used by multiple clients.

7. RIPS

 

RIPS is a static code analysis tool that detects different types of security vulnerabilities in PHP code. RIPS also provides an integrated code audit framework for manual analysis. It is open source as well and can be controlled via a web interface.
8. Flawfinder
Flawfinder is also one of the best static analysis tools for C/C++. It is easy to use and well designed, reporting possible security vulnerabilities sorted by risk level. It is an open-source tool written in Python with a command-line interface.
9. DevBug
DevBug is an online PHP static code analyzer, written largely in JavaScript, that is very easy to use. It was intended to make basic PHP static code analysis accessible on the web, to raise security awareness, and to integrate SCA into the development process. The analyzer is also available as open source.

10. SonarQube

 

SonarQube is one of the best-known open-source, web-based static code analysis tools. It can scan projects written in many different programming languages, including ABAP, Android (Java), C, C++, CSS, Objective-C, COBOL, C#, Flex, Forms, Groovy, Java, JavaScript, Natural, PHP, PL/SQL, Swift, Visual Basic 6, Web, XML, and Python, and it also supports a large number of plugins. What makes SonarQube really stand out is that it provides metrics about your code to help you make the right decisions, and it translates these raw values into real business terms such as risk and technical debt.
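For reference, a typical minimal `sonar-project.properties` file as used by the SonarScanner CLI. The project key, name, and paths here are placeholders:

```properties
# Minimal sonar-project.properties for the SonarScanner CLI
# (project key, name, and source directory are placeholders)
sonar.projectKey=my-project
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src
sonar.host.url=http://localhost:9000
```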
So, above we covered a selection of the top static code analysis tools, which we hope is helpful. If you think this list should include other tools, feel free to share them in the comment box.

Chef Code Analysis using Foodcritic | Foodcritic Tutorial


What is Foodcritic? Foodcritic is a static linting tool that analyzes all of the Ruby code authored in a cookbook against a number of rules, and then returns a list of violations. In other words, Foodcritic is a helpful lint tool you can use to check your Chef cookbooks for common problems.

We use Foodcritic to check cookbooks for common problems:

  • Style
  • Correctness
  • Syntax
  • Best practices
  • Common mistakes
  • Deprecations

What Foodcritic does not do
Foodcritic does not validate the intention of a recipe; rather, it evaluates the structure of the code, helps enforce specific behavior, detects portability problems in recipes, identifies potential run-time failures, and spots common anti-patterns.

When Foodcritic returns a violation, this does not automatically mean the code needs to be changed. It is important to first understand the intention of the rule before making the changes it suggests.

Foodcritic has two goals:

To make it easier to flag problems in your Chef cookbooks that will cause Chef to blow up when you attempt to converge. This is about faster feedback. If you automate checks for common problems you can save a lot of time.

To encourage discussion within the Chef community on the more subjective stuff – what does a good cookbook look like? Opscode have avoided being overly prescriptive which by and large I think is a good thing. Having a set of rules to base discussion on helps drive out what we as a community think is good style.

Foodcritic built-in Rules
It comes with 47 built-in rules that identify problems ranging from simple style inconsistencies to difficult-to-diagnose issues that will hurt in production. To see the full list of rules, visit:
http://www.foodcritic.io/

Prerequisites
Foodcritic runs on Ruby (MRI) 1.9.2+, which, depending on your workstation setup, may be a more recent version of Ruby than you have installed. The Ruby Version Manager (RVM) is a popular choice for running multiple versions of Ruby on the same workstation, so you can try Foodcritic out without risking damage to your main install.

Foodcritic installation

Method 1
Install RVM as non-root user

$ sudo /etc/init.d/iptables stop OR sudo start ufw

$ curl -s raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer | bash -s stable
OR
$ sudo bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer )
OR
$ curl -s raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer | sudo bash -s stable
OR
$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
OR
$ command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

$ rvm get stable
$ rvm install ruby-2.2.3
$ gem install foodcritic

Method 2
Install ruby

$ sudo apt-get install ruby-2.2.3 (Ubuntu)
$ sudo yum install ruby-2.2.3 (RHEL)

Install foodcritic
$ gem install foodcritic

Method 3
Alternatively install ChefDK which already includes foodcritic: https://downloads.getchef.com/chef-dk/

How to run Foodcritic?
You should now find you have a foodcritic command on your PATH. Run foodcritic to see what arguments it supports:

foodcritic [cookbook_path]
-r, --[no-]repl Drop into a REPL for interactive rule editing.
-t, --tags TAGS Only check against rules with the specified tags.
-f, --epic-fail TAGS Fail the build if any of the specified tags are matched.
-C, --[no-]context Show lines matched against rather than the default summary.
-I, --include PATH Additional rule file path(s) to load.
-S, --search-grammar PATH Specify grammar to use when validating search syntax.
-V, --version Display version.

How to setup Foodcritic with Jenkins

Configuring Jenkins to run foodcritic
To manually add a new job to Jenkins to check your cookbooks with foodcritic do the following:

  1. Ensure you have Ruby 1.9.2+ and the foodcritic gem installed on the box running Jenkins.
  2. You’ll probably need to install the Git plugin. In Jenkins select “Manage Jenkins” -> “Manage Plugins”. Select the “Available” tab. Check the checkbox next to the Git Plugin and click the “Install without restart” button.
  3. In Jenkins select “New Job”. Enter a name for the job “my-cookbook”, select “Build a free-style software project” and click “OK”.
  4. On the resulting page select “Git” under “Source Code Management” and enter the URL for your repo.
  5. Check the checkbox “Poll SCM” under “Build Triggers”.
  6. Click “Add Build Step” -> “Execute shell” under “Build”. This is where we will call foodcritic.
  7. Assuming you are using RVM, enter the following as the command:
     #!/usr/bin/env rvm-shell 1.9.3
     foodcritic .
  8. Click “Save”.
  9. Cool, we’ve created your new job. Now let’s see if it works. Click “Build Now” on the left-hand side.
  10. You can click the build progress bar to be taken directly to the console output.
  11. After a moment you should see that the build has been successful, with any foodcritic warnings shown in your console output.
  12. Yes, for maximum goodness you should be automating all this with Chef. 🙂
  13. For more information, refer to the instructions for building a “free-style software project” here:
     https://wiki.jenkins-ci.org/display/JENKINS/Building+a+software+project
  14. See also this blog post about rvm-shell, which ensures you have the right version of Ruby loaded when trying to build with foodcritic:
     http://blog.ninjahideout.com/posts/rvm-improved-support-for-hudson

Failing the build
The above is a start, but we’d also like to fail the build if there are any warnings that might stop the cookbook from working.

CI is only useful if people will act on it. Let’s start by only failing the build when there is a correctness problem that would likely break our Chef run. We’ll continue to have the other warnings available for reference in the console log, but only correctness issues will fail the build.

Select the “my-cookbook” job in Jenkins and click “Configure”.

Scroll down to our “Execute shell” command and change it to look like the following:

#!/usr/bin/env rvm-shell 1.9.3
foodcritic -f correctness .
Click “Save” and then “Build Now”.

More complex expressions
Foodcritic supports more complex expressions with the standard Cucumber tag syntax. For example:

#!/usr/bin/env rvm-shell 1.9.3
foodcritic -f any -f ~FC014 .
Here we use any to fail the build on any warning, but then use the tilde ~ to exclude FC014. The build will fail on any warning raised, except FC014.
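To see how these filters combine, here is a toy model of the selection logic (Python, purely illustrative — this is not foodcritic’s actual implementation): every warning matches `any`, and a leading tilde excludes a specific rule code.

```python
# Toy model of how '-f any -f ~FC014' selects which warnings
# should fail the build. Not foodcritic's real code -- just the
# include/exclude semantics described above.
def fails_build(warning_code, filters):
    excluded = {f[1:] for f in filters if f.startswith("~")}
    included = {f for f in filters if not f.startswith("~")}
    if warning_code in excluded:
        return False
    return "any" in included or warning_code in included

filters = ["any", "~FC014"]
print(fails_build("FC001", filters))  # True  -- any warning fails the build
print(fails_build("FC014", filters))  # False -- except the excluded FC014
```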

You can find more detail on Cucumber tag expressions at the Cucumber wiki:

https://github.com/cucumber/cucumber/wiki/Tags

Tracking warnings over time
The Jenkins Warnings plugin can be configured to understand foodcritic output and track your cookbook warnings over time.

You’ll need to install the Warnings plugin. In Jenkins select “Manage Jenkins” -> “Manage Plugins”. Select the “Available” tab. Check the checkbox next to the Warnings Plugin and click the “Install without restart” button.

From “Manage Jenkins” select “Configure System”. Scroll down to the “Compiler Warnings” section and click the “Add” button next to “Parsers”.

Enter “Foodcritic” in the Name field.

Enter the following regex in the “Regular Expression” field:

^(FC[0-9]+): (.*): ([^:]+):([0-9]+)$

Enter the following Groovy script into the “Mapping Script” field:

import hudson.plugins.warnings.parser.Warning

String fileName = matcher.group(3)
String lineNumber = matcher.group(4)
String category = matcher.group(1)
String message = matcher.group(2)

return new Warning(fileName, Integer.parseInt(lineNumber), "Chef Lint Warning", category, message);

To test the match, enter the following example message in the “Example Log Message” field:

FC001: Use strings in preference to symbols to access node attributes: ./recipes/innostore.rb:30
Click in the “Mapping Script” field and you should see the following appear below the Example Log Message:

One warning found
file name: ./recipes/innostore.rb
line number: 30
priority: Normal Priority
category: FC001
type: Chef Lint Warning
message: Use strings in prefe[…]ols to access node attributes
Cool, it’s parsed our example message successfully. Click “Save” to save the parser.
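If you want to sanity-check the regular expression outside Jenkins first, a quick standalone sketch does the job (Python here purely for illustration; the plugin itself runs the Groovy mapping script above):

```python
import re

# Same pattern as configured in the Warnings plugin parser.
PATTERN = re.compile(r"^(FC[0-9]+): (.*): ([^:]+):([0-9]+)$")

line = ("FC001: Use strings in preference to symbols "
        "to access node attributes: ./recipes/innostore.rb:30")

m = PATTERN.match(line)
category, message, file_name, line_number = m.groups()

print(category)     # FC001
print(file_name)    # ./recipes/innostore.rb
print(line_number)  # 30
```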

Select the “my-cookbook” job in Jenkins and click “Configure”.

Check the checkbox next to “Scan for compiler warnings” underneath “Post-build Actions”.

Click the “Add” button next to “Scan console log” and select our “Foodcritic” parser from the drop-down list.

Click the “Advanced…” button and check the “Run always” checkbox.

Click “Save” and then “Build Now”.

At the bottom of the console log you should see something similar to this:

[WARNINGS] Parsing warnings in console log with parsers [Foodcritic]
[WARNINGS] Foodcritic : Found 48 warnings.
Click “Back to Project”. Once you have built the project a couple of times the warnings trend will appear here.

Reference:
http://acrmp.github.io/foodcritic/
https://docs.chef.io/foodcritic.html
http://www.foodcritic.io/
https://atom.io/packages/linter-foodcritic
http://www.slideshare.net/harthoover/rapid-chef-development-with-berkshelf-testkitchen-and-foodcritic


Source code analysis tools: Evaluation criteria

Support for the programming languages you use. Some tools target mobile platforms, while others concentrate on enterprise languages such as Java, .Net, C, C++ and even Cobol.

Good bug-finding performance, using a proof of concept assessment. Hint: Use an older build of code you had issues with and see how well the product catches bugs you had to find manually. Look for both thoroughness and accuracy. Fewer false positives means less manual work.

Internal knowledge bases that provide descriptions of vulnerabilities and remediation information. Test for easy access and cross-referencing to discovered findings.

Tight integration with your development platforms. Long-term, you’ll likely want developers to incorporate security analysis into their daily routines.

A robust finding-suppression mechanism to prevent false positives from reoccurring once you’ve verified them as a non-issue.

Ability to easily define additional rules so the tool can enforce internal coding policies.

A centralized reporting component if you have a large team of developers and managers who want access to findings, trending and overview reporting.
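To make the “easily define additional rules” criterion concrete: in most tools a custom rule boils down to an identifier, a pattern, and a message, registered through the tool’s own plugin API. A minimal hypothetical sketch of the idea (the rule name and helper below are invented for illustration):

```python
import re

# Hypothetical in-house rule: flag hard-coded credentials.
# Real tools let you register rules through their own plugin
# APIs; this only sketches the concept.
RULE = {
    "id": "ORG001",
    "pattern": re.compile(r"password\s*=\s*['\"]"),
    "message": "Hard-coded password detected",
}

def scan(source):
    """Return (rule id, line number, message) for each violation."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if RULE["pattern"].search(line):
            findings.append((RULE["id"], lineno, RULE["message"]))
    return findings

code = "user = 'admin'\npassword = 'hunter2'\n"
print(scan(code))  # [('ORG001', 2, 'Hard-coded password detected')]
```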


How to Differentiate Dynamic code analysis and Static code analysis?

Difference between dynamic code analysis and static code analysis

Static analysis is the testing and evaluation of an application by examining the code without executing the application whereas Dynamic analysis is the testing and evaluation of an application during runtime.

Many software defects that cause memory and threading errors can be detected both dynamically and statically. The two approaches are complementary because no single approach can find every error.

The primary advantage of dynamic analysis: It reveals subtle defects or vulnerabilities whose cause is too complex to be discovered by static analysis. Dynamic analysis can play a role in security assurance, but its primary goal is finding and debugging errors.

Level of in-depth review

The key difference between a static and a dynamic code analyser is how in-depth the code review process is. By default, static code analysis combs through every single line of source code to find flaws and errors. For dynamic analysis, the lines of code that get reviewed depend on which lines are exercised during the testing process. Unless a line of code is executed, the dynamic analysis tool ignores it and continues checking the active code for flaws. As a result, dynamic analysis is a lot quicker, since it reviews code on the fly and generates real-time data. However, static code analysis provides peace of mind that each and every line of source code has been thoroughly inspected. It may take longer, but static code analysis runs in the background and is crucial for creating a flawless web application.

 

Catching errors early and making recommendations

The primary advantage of static analysis: It examines all possible execution paths and variable values, not just those invoked during execution. Thus static analysis can reveal errors that may not manifest themselves until weeks, months or years after release. This aspect of static analysis is especially valuable in security assurance, because security attacks often exercise an application in unforeseen and untested ways.
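To make that concrete, here is a tiny illustrative snippet (hypothetical, in Python): the defect lives on a path a test suite may never exercise, so dynamic analysis misses it until that path actually runs, while a static “possibly undefined variable” check flags it without executing anything.

```python
def discount(price, code):
    if code == "VIP":
        rate = 0.2
    # Static analysis sees that 'rate' may be undefined on the
    # non-VIP path without running the program. Dynamic analysis
    # only trips over it when a non-VIP code is passed in.
    return price * (1 - rate)

print(discount(100, "VIP"))   # 80.0 -- the tested path works fine
# discount(100, "SUMMER")     # would raise UnboundLocalError at runtime
```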

As mentioned before, dynamic analysis reviews code during the testing process and generates real-time results. While it is great for fine-tuning the user experience, it has one major drawback: any errors highlighted by dynamic code analysis tools require developers to go all the way back to the source code, make changes to the code itself, and then make changes to everything that has been modified as a result of changing the source code. This is a very time-consuming and expensive process, one that companies and developers like to avoid at all costs. Static code analysis tools highlight errors immediately and allow developers to make changes before proceeding any further. Moreover, static code analysis tools are often more feature-packed than their dynamic counterparts. One important feature is the number of errors they can detect and the recommendations they can make to fix those errors. If configured to do so, some static code analysers can automatically make the required changes and let developers know what changes have been made.

 

Cost of code analysis tools

Just like any other business, software application companies have to find a fine balance between application costs and profit margins. With respect to price, static code analysis tools are generally cheaper than dynamic analysers. Moreover, having a dynamic code analyser requires a company to hire professionals trained in the use of dynamic analysis tools. A static code analysis tool can be used by any web developer with ease, so it is unlikely to turn into a long-term expenditure.

Static code analysers are absolutely essential for application developers, whereas dynamic code analysers are best used in conjunction with static analysis tools.

 




Static vs dynamic code analysis: Advantages and Disadvantages

What are the advantages and limitations of static and dynamic software code analysis? Maj. Michael Kleffman of the Air Force’s Application Software Assurance Center of Excellence spelled it out.

Static code analysis advantages:

  1. It can find weaknesses in the code at the exact location.
  2. It can be conducted by trained software assurance developers who fully understand the code.
  3. It allows a quicker turnaround for fixes.
  4. It is relatively fast if automated tools are used.
  5. Automated tools can scan the entire code base.
  6. Automated tools can provide mitigation recommendations, reducing the research time.
  7. It permits weaknesses to be found earlier in the development life cycle, reducing the cost to fix.

Static code analysis limitations:

  1. It is time consuming if conducted manually.
  2. Automated tools do not support all programming languages.
  3. Automated tools produce false positives and false negatives.
  4. There are not enough trained personnel to thoroughly conduct static code analysis.
  5. Automated tools can provide a false sense of security that everything is being addressed.
  6. Automated tools are only as good as the rules they are using to scan with.
  7. It does not find vulnerabilities introduced in the runtime environment.

Dynamic code analysis advantages:

  1. It identifies vulnerabilities in a runtime environment.
  2. Automated tools provide flexibility on what to scan for.
  3. It allows for analysis of applications in which you do not have access to the actual code.
  4. It identifies vulnerabilities that might have been false negatives in the static code analysis.
  5. It permits you to validate static code analysis findings.
  6. It can be conducted against any application.

Dynamic code analysis limitations:

  1. Automated tools provide a false sense of security that everything is being addressed.
  2. Automated tools produce false positives and false negatives.
  3. Automated tools are only as good as the rules they are using to scan with.
  4. There are not enough trained personnel to thoroughly conduct dynamic code analysis [as with static analysis].
  5. It is more difficult to trace the vulnerability back to the exact location in the code, taking longer to fix the problem.