Tuesday, February 26, 2019

Tutorial 03 – Industry practices and tools 2

1. Discuss the importance of maintaining the quality of the code, explaining the different aspects of the code quality 

When you write code in a project, you have to make sure others can comprehend it. Someone in your group who is less experienced may not understand overly complicated code, so you should keep the code clear and usable so that everyone can read, check, and work with it.

To conclude, the quality of code is as important to a programmer as the quality of food is to a chef: others will be testing and tasting the code, consuming it, and building on it. Code should therefore be of high quality, understandable by both machines and humans alike, and easy to fix and maintain later down the line.

  • Readability, consistency — how easy it is to read and understand sections of the code; this includes code clarity, simplicity, and documentation.
  • Predictability, reliability, and robustness — software behavior should be predictable, and not prone to hidden bugs.
  • Maintainability and extensibility— fixing, updating and improving software should be as simple as possible, not inherently complex.

Poor-quality code is usually caused by:
  • Lack of (or insufficient) coding style/standards.
  • No / poor documentation.
  • Poorly designed architecture (with no separation of responsibilities, as in MVC).
  • High method complexity
In a typical example of poor-quality code:
  1. There is no function documentation, no comment lines, and no apparent coding standard is followed (seen, for example, in the usage of curly brackets and empty lines).
  2. The complexity is relatively high due to the number of different actions and processes (DB queries, view/output, and business logic), multiple nesting levels.
  3. There is an inconsistency in the ways to perform output and variable interpolation.

In a typical example of high-quality code (an illustrative sketch follows after this list):
  1. The code is simple and self-explanatory.
  2. Different logic sections are separated by empty lines.
  3. There are few nesting/indentation levels, with early return statements.
  4. There are proper design considerations (separation of responsibilities by different objects/classes).
  5. Due to the high code quality and clarity, the class/method should be easy to test and maintain, with low effort; the probability of bugs occurring should also be extremely low.
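Since the original code screenshots are not reproduced here, the following is a small, hypothetical Java sketch illustrating the contrast: the first method has no documentation, deep nesting, and leaves a trailing comma in its output, while the second is documented, returns early, and keeps the logic flat.

import java.util.Arrays;
import java.util.stream.Collectors;

public class CodeQualityExample {

    // Lower quality: no documentation, deep nesting, trailing comma left in the output.
    static String format(int[] a) {
        String s = "";
        if (a != null) {
            for (int i = 0; i < a.length; i++) {
                if (a[i] > 0) {
                    s = s + a[i] + ",";
                }
            }
        }
        return s;
    }

    /** Returns the positive values of the array joined by commas. */
    static String formatPositives(int[] values) {
        if (values == null) {
            return "";                                    // early return keeps nesting shallow
        }
        return Arrays.stream(values)
                .filter(v -> v > 0)
                .mapToObj(Integer::toString)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(format(new int[] {1, -2, 3}));          // prints "1,3,"
        System.out.println(formatPositives(new int[] {1, -2, 3})); // prints "1,3"
    }
}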
2. Explain different approaches and measurements used to measure the quality of code
  • Weighted Micro Function Points
  • Halstead Complexity Measures
  • Cyclomatic Complexity (see the sketch below)
  • Lines of code
  • Lines of code per method
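As an illustration of one of these metrics, here is a small, hypothetical Java method with its cyclomatic complexity counted as the number of decision points plus one.

public class ComplexityExample {

    // One loop, one if, and one && condition are 3 decision points,
    // so the cyclomatic complexity of this method is 4.
    static int countPositiveEven(int[] values) {
        int count = 0;
        for (int v : values) {              // +1
            if (v > 0 && v % 2 == 0) {      // +1 for the if, +1 for the &&
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countPositiveEven(new int[] {1, 2, -4, 6})); // prints 2
    }
}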

3. Identify and compare some available tools to maintain the code quality 
SonarQube
It is an open-source, web-based tool that covers more than 20 languages and also supports a number of plugins.

Coverity

Coverity Scan is an open-source, cloud-based tool. It works for projects written in C, C++, Java, C#, or JavaScript. The tool provides very detailed and clear descriptions of the issues it finds, which helps with faster resolution. It is a good choice if you are looking for an open-source tool.

Code Compare

Code Compare is a free file and folder comparison and merge tool. Over 70,000 users actively use Code Compare while resolving merge conflicts and deploying source code changes. It integrates with all popular source control systems: TFS, SVN, Git, Mercurial, and Perforce, and is shipped both as a standalone file diff tool and as a Visual Studio extension.
Key features:

  • Text Comparison and Merging
  • Semantic Source Code Comparison
  • Folder Comparison
  • Visual Studio Integration
  • Version Control Integration, and more

GAMMA
Gamma is an intelligent software analytics platform, developed by Acellere. It supports developers and teams in building higher quality software in less time, by speeding up code reviews.


It automatically prioritizes hotspots in the code and provides clear visualizations. With its multi-vector diagnostic technology, it analyses software through multiple lenses, including software design, and enables companies to manage and improve their software quality transparently.

4. Discuss the need for dependency/package management tools in software development? 
Package managers automate the process of installing, upgrading, configuring, and removing computer programs from an operating system in a consistent manner. A package manager deals with ‘packages’ – distributions of software and data in archive files. Packages contain metadata, such as the software’s name, its purpose, version number, checksum and a list of dependencies necessary for the software to run properly. Upon installation, metadata is stored in a local package database. Package managers typically maintain a database of software dependencies and version information to prevent software mismatches and missing prerequisites. They work closely with software repositories, binary repository managers and app stores.


5. Explain the role of dependency/package management tools in software development
A software package is an archive file containing a computer program as well as the necessary metadata for its deployment. The computer program can be in source code form that has to be compiled and built first. Package metadata includes the package description, package version, and dependencies (other packages that need to be installed beforehand).

Package managers are charged with the task of finding, installing, maintaining or uninstalling software packages upon the user's command. Typical functions of a package management system include:

  • Working with file archivers to extract package archives
  • Ensuring the integrity and authenticity of the package by verifying their digital certificates and checksums
  • Looking up, downloading, installing or updating existing software from a software repository or app store
  • Grouping packages by function to reduce user confusion
  • Managing dependencies to ensure a package is installed with all packages it requires, thus avoiding "dependency hell"

6. Compare and contrast different dependency/package management tools used in industry
Maven
 A repository in Maven holds build artifacts and dependencies of varying types.
There are exactly two types of repositories: local and remote. The local repository is a directory on the computer where Maven runs. It caches remote downloads and contains temporary build artifacts that you have not yet released


Yum

Yum is an automatic updater and package installer/remover for rpm systems. It automatically computes dependencies and figures out what should occur to install packages. It makes it easier to maintain groups of machines without having to manually update each one using rpm. Yum has a plugin interface for adding simple features. Yum can also be used from other Python programs via its module interface.

NuGet

NuGet is the package manager for the Microsoft development platform including .NET. The open-source NuGet client tools provide users with the ability to produce and consume packages in a similar fashion to RedHat's yum, but with a software development focus. The central package repository for NuGet is known as the NuGet Gallery and is used by all package authors and consumers


Chocolatey
Chocolatey is a package manager for Windows. Designed as a decentralized framework for quickly installing applications and tools, it is built on the NuGet infrastructure and uses PowerShell to deliver packages. Chocolatey packages can be used independently, but also integrate with configuration managers like SCCM, Puppet and Chef.

7. What is a build tool? Indicate the significance of using a build tool in large scale software development, distinguishing it from small scale software development 
What does Build Tool mean? 
Build tools are programs that automate the creation of executable applications from source code. Building incorporates compiling, linking and packaging the code into a usable or executable form. In small projects, developers will often manually invoke the build process. This is not practical for larger projects, where it is very hard to keep track of what needs to be built, in what sequence and what dependencies there are in the building process. Using an automation tool allows the build process to be more consistent.

8. Explain the role of build automation in build tools indicating the need for build automation
Build automation is the process of scripting and automating the retrieval of software code from a repository, compiling it into a binary artifact, executing automated functional tests, and publishing it into a shared and centralized repository.
9. Compare and contrast different build tools used in industry 
Ant
Apache’s Ant is an open source Java library and command-line tool used for automating software build processes. It’s primarily used for building Java applications. Created in 2000, Ant is the original build tool in the Java space that’s still being used today. You’ll probably want to include Ivy with it if you want any dependency management capabilities.

When to use it: If you want nearly total control over how your build tool runs and are willing to put in the extra effort to get that.

Price: free
Pros:

  • XML base means it works well with automatic tools.
  • Once up and running, Ant gives you nearly full control over how things happen.
  • Rich plugin ecosystem opens up a lot of possibilities, and it’s easy to create custom plugins if what you need isn’t available.
  • Solid and extensive documentation.


Cons:

  • XML base means fewer customization capabilities.
  • Ant makes you do pretty much everything yourself, which can be daunting.
  • Build scripts are often very different, which makes understanding other projects difficult.
  • As an old established tool, the community is fairly dead.

Maven
Apache’s Maven is a build automation tool primarily for Java projects, and is the most popular choice for Java developers today by the usage numbers. Unlike Apache Ant, it uses conventions for the build procedure, and only exceptions need to be written down.

When to use it: If you want the de facto tool and plugin repository. If you’re running anything unusual with your other tools, Maven will support it. Well suited for large enterprises due to its very fast build speed.

Price: free

Pros:

  • Extensive ecosystem for plugins.
  • Common structure between builds makes understanding other projects easy.
  • Full support for almost any CI, app server, or IDE tool.


Cons:

  • Lots of download requirements for dependencies and plugins.
  • Up and down documentation quality.
  • Community is largely quiet.
  • Customization is weak.
Gradle
Gradle is an open source build automation system. With version 1.0 released in 2012, Gradle aims to “combine the power and flexibility of Ant with the dependency management and conventions of Maven into a more effective way to build.” Its build scripts are written in Groovy, not XML, which creates a host of different advantages and disadvantages compared to Ant or Maven. Despite being a newer tool in this space, it’s seen widespread adoption.

When to use it: Gradle is designed for multi-project environments and incremental builds. It’s good if you’re comfortable with Groovy or are willing to get there. It’s also great for personal projects and SMBs.

Price: free

Pros:

  • DSL base means you have a more customizable and streamlined tool.
  • No required build script boilerplate makes for a simpler experience.
  • Excellent documentation and active community. For example, Gradleware is a company designed around facilitating the adoption and use of Gradle through consultancy and other guidance.
  • It’s simple to create custom plugins.


Cons:

  • DSL base means you have a less straightforward and standardized tool.
  • As the new kid on the block, the ecosystem for plugins and the like is less developed.
  • As a newer tool, its support for CI tools and app servers isn’t as fleshed out as Maven’s or Ant’s.
10. Explain the build life cycle, using an example (java, .net, etc…) 

Java (the Maven default lifecycle phases)
  1. validate: validate the project is correct and all necessary information is available.
  2. compile: compile the source code of the project.
  3. test: test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed.
  4. package: take the compiled code and package it in its distributable format, such as a JAR.
  5. integration-test: process and deploy the package if necessary into an environment where integration tests can be run.
  6. verify: run any checks to verify the package is valid and meets quality criteria.
  7. install: install the package into the local repository, for use as a dependency in other projects locally.
  8. deploy: done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.



.NET

  1. Page request: The page request occurs before the page life cycle begins. When the page is requested by a user, ASP.NET determines whether the page needs to be parsed and compiled (therefore beginning the life of a page), or whether a cached version of the page can be sent in response without running the page.
  2. Start: In the start stage, page properties such as Request and Response are set. At this stage, the page also determines whether the request is a postback or a new request and sets the IsPostBack property. The page also sets the UICulture property.
  3. Initialization: During page initialization, controls on the page are available and each control's UniqueID property is set. A master page and themes are also applied to the page if applicable. If the current request is a postback, the postback data has not yet been loaded and control property values have not been restored to the values from view state.
  4. Load: During load, if the current request is a postback, control properties are loaded with information recovered from view state and control state.
  5. Postback event handling: If the request is a postback, control event handlers are called. After that, the Validate method of all validator controls is called, which sets the IsValid property of individual validator controls and of the page. (There is an exception to this sequence: the handler for the event that caused validation is called after validation.)
  6. Rendering: Before rendering, view state is saved for the page and all controls. During the rendering stage, the page calls the Render method for each control, providing a text writer that writes its output to the OutputStream object of the page's Response property.
  7. Unload: The Unload event is raised after the page has been fully rendered, sent to the client, and is ready to be discarded. At this point, page properties such as Response and Request are unloaded and cleanup is performed.

11. What is Maven, a dependency/package management tool or a build tool or something more?

At first glance Maven can appear to be many things, but in a nutshell Maven is an attempt to apply patterns to a project's build infrastructure in order to promote comprehension and productivity by providing a clear path in the use of best practices. Maven is essentially a project management and comprehension tool and as such provides a way to help with managing:
  • Builds
  • Documentation
  • Reporting
  • Dependencies
  • SCMs
  • Releases
  • Distribution
If you want more background information on Maven, you can check out The Philosophy of Maven and The History of Maven.


12. Discuss how Maven uses conventions over configurations, explaining Maven’s approach to manage the configurations 
Convention over Configuration

Maven uses Convention over Configuration, which means developers are not required to create the build process themselves.
Developers do not have to specify each and every configuration detail. Maven provides sensible default behaviour for projects: when a Maven project is created, Maven creates a default project structure. The developer only needs to place the files accordingly and does not need to define the configuration in pom.xml.

13. Discuss the terms build phases, build life cycle, build profile, and build goal in Maven 

Maven Phase
A Maven phase represents a stage in the Maven build lifecycle. Each phase is responsible for a specific task.

Build Profile
A Build profile is a set of configuration values, which can be used to set or override default values of Maven build. Using a build profile, you can customize build for different environments such as Production v/s Development environments.

Profiles are specified in the pom.xml file using its activeProfiles/profiles elements and are triggered in a variety of ways. Profiles modify the POM at build time, and are used to give different parameters to different target environments (for example, the path of the database server in the development, testing, and production environments).

Build Lifecycle 
The Maven build follows a specific life cycle to deploy and distribute the target project.
There are three built-in life cycles:
  • default: the main life cycle as it’s responsible for project deployment
  • clean: to clean the project and remove all files generated by the previous build
  • site: to create the project’s site documentation
Each life cycle consists of a sequence of phases. The default build life cycle consists of 23 phases as it’s the main build lifecycle.
On the other hand, the clean life cycle consists of 3 phases, while the site lifecycle is made up of 4 phases.

Maven Goal 
Each phase is a sequence of goals, and each goal is responsible for a specific task.
When we run a phase – all goals bound to this phase are executed in order

14. Discuss with examples, how Maven manages dependency/packages and build life cycle
Maven manages dependency/packages
Dependency management is a core feature of Maven. Managing dependencies for multi-module projects and applications that consist of hundreds of modules is possible. Maven helps a great deal in defining, creating, and maintaining reproducible builds with well-defined classpaths and library versions.

Maven build life cycle
A build lifecycle is a sequence of tasks we use to build software. For example, compiling, testing, packaging, and publishing or deploying are all tasks we need to do to build software.
A Maven build lifecycle is a sequence of phases we need to go through in order to finish building the software.

15. Identify and discuss some other contemporary tools and practices widely used in the software industry 

Wrike
It offers a load of useful features, including task management, task prioritization, a real-time newsfeed, an interactive timeline (Gantt chart), and workload management. These tools help both distributed and co-located project teams to work speedily and efficiently. With this solution, your team can schedule, discuss, and prioritize their tasks, and track progress in real time.

Monday.com
Monday.com is one of the leading collaboration and communication software for teams that syncs all information in a single, accessible hub, empowering agents and team members to make important decisions together. Its standout capability is streamlining contribution, helping teams and departments work and collaborate in the most efficient manner. Monday.com allows you to assemble and display progress data in a logical and understandable manner, enabling team members to keep track of projects and common tasks

ProjectManager
ProjectManager is an award-winning online project management tool designed to provide efficiency in project planning, budgeting, scheduling, execution, and reporting. One of the most trusted project management systems in the market today, this tool can help you successfully implement and complete any small or big projects, with either short- or long-term durations.
This platform essentially focuses on, and provides effective tools for, the three major parts of a project (planning, monitoring, and reporting), plus a wide array of advanced add-on features. With ProjectManager, you can create and manage tasks via the cloud, and tasks can be updated by your team members even when they’re on the go. It also features real-time dashboards, automated emails, and quick report generation.

Thursday, February 21, 2019

Tutorial 02 - Industry Practices and Tools 1

01.What is the need for Version control system (VCS)?
Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later.
In these examples, software source code is used as the files being version controlled, though in reality you can do this with nearly any type of file on a computer.


If you are a graphic or web designer and want to keep every version of an image or layout (which you would most certainly want to), a Version Control System (VCS) is a very wise thing to use.

  • It allows you to revert selected files back to a previous state
  • revert the entire project back to a previous state
  • compare changes over time, see who last modified something that might be causing a problem
  • who introduced an issue and when, and more
  • generally means that if you screw things up or lose files, you can easily recover
2.Differentiate the three models of VCSs, stating their pros and cons
Local Version Control Systems
A local version control system keeps track of files within the local system. This approach is very common and simple. This type is also error-prone, which means the chances of accidentally writing to the wrong file are higher.



Centralized Version Control Systems
In this approach, all the changes in the files are tracked under the centralized server. The centralized server includes all the information of versioned files, and list of clients that check out files from that central place.
Example : CVS, Subversion, and Perforce
                                    
Distributed Version Control Systems
Distributed version control systems came into the picture to overcome the drawbacks of centralized version control systems. Clients completely clone the repository, including its full history. If any server dies, any of the client repositories can be copied onto the server, which helps restore the server.
Every clone is considered a full backup of all the data.

Example : Git, Mercurial, Bazaar or Darcs




3. Git and GitHub, are they the same or different? Discuss with facts.

Git and GitHub are different.
Git is a revision control system, a tool to manage your source code history.
GitHub is a hosting service for Git repositories.
Git is the tool; GitHub is the service for projects that use Git.



4.Compare and contrast the Git commands, commit and push

Since Git is a distributed version control system, the difference is that commit records changes to your local repository, whereas push sends those changes up to a remote repository. git commit records your changes in the local repository; git push updates the remote repository with your local changes.

git commit "records changes to the repository", while git push "updates remote refs along with associated objects".

5. Discuss the use of staging area and Git directory
Staging area
Git makes it easier to control exactly what goes into each commit by allowing you to specify exactly which changes should be committed. To accomplish this, Git uses an intermediate area called the staging area; you can add files to it one at a time before committing.
Git directory
The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer. The working directory is a single checkout of one version of the project

6. Explain the collaboration workflow of Git, with example
Gitflow Workflow is a Git workflow design that was first published and made popular by Vincent Driessen at nvie. The Gitflow Workflow defines a strict branching model designed around the project release. This provides a robust framework for managing larger projects.

7. Discuss the benefits of CDNs 
1. Your Server Load will decrease:
As a result of strategically placed servers, which form the backbone of the network, companies can increase the capacity and the number of concurrent users they can handle. Essentially, the content is spread out across several servers, as opposed to offloading it onto one large server.

2. Content Delivery will become faster:
Due to higher reliability, operators can deliver high-quality content with a high level of service, low network server loads, and thus lower costs. Moreover, jQuery is ubiquitous on the web, so there is a high probability that someone visiting a particular page has already downloaded it in the past from the Google CDN. The file has therefore already been cached by the browser, and the user won’t need to download it again.

3. Segmenting your audience becomes easy:
CDNs can deliver different content to different users depending on the kind of device requesting the content. They are capable of detecting the type of mobile devices and can deliver a device-specific version of the content.

4. Storage and Security:
CDNs offer secure storage capacity for content such as videos for enterprises that need it, as well as archiving and enhanced data backup services. CDNs can secure content through Digital Rights Management and limit access through user authentication.



  1. Performance: reduced latency and minimized packet loss
  2. Scalability: automatically scale up for traffic spikes
  3. SEO Improvement: benefit from the Google SEO ranking factor
  4. Reliability: automatic redundancy between edge servers
  5. Lower Costs: save bandwidth with your web host
  6. Security: KeyCDN mitigates DDoS attacks on edge servers
8. How CDNs differ from web hosting servers?
  • Web Hosting is used to host your website on a server and let users access it over the internet. A content delivery network is about speeding up the access/delivery of your website’s assets to those users.
  • Traditional web hosting would deliver 100% of your content to the user. If they are located across the world, the user still must wait for the data to be retrieved from where your web server is located. A CDN takes a majority of your static and dynamic content and serves it from across the globe, decreasing download times. Most times, the closer the CDN server is to the web visitor, the faster assets will load for them.
  • Web Hosting normally refers to one server. A content delivery network refers to a global network of edge servers which distributes your content from a multi-host environment.


9. Identify free and commercial CDNs

Commercial CDNs
Many large websites use commercial CDNs like Akamai Technologies to cache their web pages around the world. A website that uses a commercial CDN works the same way. The first time a page is requested, by anyone, it is built from the web server. But then it is also cached on the CDN server. Then when another customer comes to that same page, first the CDN is checked to determine if the cache is up-to-date. If it is, the CDN delivers it, otherwise, it requests it from the server again and caches that copy.
A commercial CDN is a very useful tool for a large website that gets millions of page views, but it might not be cost effective for smaller websites.
  • Chrome Frame
  • Dojo Toolkit
  • Ext JS
  • jQuery
  • jQuery UI
  • MooTools
  • Prototype

Free CDNs
While there are a number of fantastic premium CDN solutions to choose from, there are also a lot of great free (open source) CDNs you can use to help decrease the costs of your next project. Most likely you are already using some of them without even knowing it. Some free CDNs are listed below.
  • Google CDN
  • Microsoft Ajax CDN
  • Yandex CDN
  • jsDelivr
  • cdnjs
  • jQuery CDN

10. Discuss the requirements for virtualization
Virtualization separates the backend level and the user level to create a seamless environment between the two. Virtualization is used for the deployment of cloud computing service models, including Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), among others.

11. Discuss and compare the pros and cons of different virtualization techniques in different levels
Guest Operating System Virtualization

Guest OS virtualization is perhaps the easiest concept to understand. In this scenario the physical host computer system runs a standard unmodified operating system such as Windows, Linux, Unix or MacOS X. Running on this operating system is a virtualization application which executes in much the same way as any other application such as a word processor or spreadsheet would run on the system.

Shared Kernel Virtualization

Shared kernel virtualization (also known as system level or operating system virtualization) takes advantage of the architectural design of Linux and UNIX based operating systems. In order to understand how shared kernel virtualization works it helps to first understand the two main components of Linux or UNIX operating systems. At the core of the operating system is the kernel. The kernel, in simple terms, handles all the interactions between the operating system and the physical hardware. The second key component is the root file system which contains all the libraries, files and utilities necessary for the operating system to function. Under shared kernel virtualization the virtual guest systems each have their own root file system but share the kernel of the host operating system.

Kernel Level Virtualization

Under kernel level virtualization the host operating system runs on a specially modified kernel which contains extensions designed to manage and control multiple virtual machines each containing a guest operating system. Unlike shared kernel virtualization each guest runs its own kernel, although similar restrictions apply in that the guest operating systems must have been compiled for the same hardware as the kernel in which they are running. Examples of kernel level virtualization technologies include User Mode Linux (UML) and Kernel-based Virtual Machine (KVM).

12. Identify popular implementations and available tools for each level of visualization 
Data visualization's central role in advanced analytics applications includes uses in planning and developing predictive models as well as reporting on the analytical results they produce.

13. What is the hypervisor and what is the role of it
Hypervisor 
A hypervisor is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a single host system at the same time. The guest OS shares the hardware of the host computer, such that each OS appears to have its own processor, memory and other hardware resources.
A hypervisor is also known as a virtual machine manager (VMM).

A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems.
14. How is emulation different from VMs?

Virtualization vs. Emulation


Emulation

Emulation is what we do to imitate the behavior of another program or device. It’s like a concept of a sandbox, allowing you to replicate the behaviors and characteristics of a particular software or program on hardware not designed for them.


Virtualization

Virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. This includes splitting a single physical infrastructure into multiple virtual servers, letting it appear as though each virtual machine is running on its own dedicated hardware and allowing each of them to be rebooted independently.

15. Compare and contrast the VMs and containers/dockers, indicating their advantages and disadvantages 

Both VMs and containers can help get the most out of available computer hardware and software resources. Containers are the new kids on the block, but VMs have been, and continue to be, tremendously popular in data centers of all sizes.

What are VMs?
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually one computer.

The operating systems (“OS”) and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or a virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machine and is necessary to virtualize the server.

Since the advent of affordable virtualization technology and cloud computing services, IT departments large and small have embraced virtual machines (VMs) as a way to lower costs and increase efficiencies
Benefits of VMs
  • All OS resources available to apps
  • Established management tools
  • Established security tools
  • Better known security controls
Popular VM Providers
  • VMware vSphere
  • VirtualBox
  • Xen
  • Hyper-V
  • KVM
What are Containers?
With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.

Containers sit on top of a physical server and its host OS — typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code, and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light — they are only megabytes in size and take just seconds to start. Compared to containers, VMs take minutes to start and are an order of magnitude larger than an equivalent container.

In contrast to VMs, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. What this means in practice is that you can put two to three times as many applications on a single server with containers as you can with a VM. In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment.
Types of Containers
Linux Containers (LXC) — The original Linux container technology is Linux Containers, commonly known as LXC. LXC is a Linux operating system level virtualization method for running multiple isolated Linux systems on a single host.

Docker — Docker started as a project to build single-application LXC containers, introducing several changes to LXC that make containers more portable and flexible to use. It later morphed into its own container runtime environment. At a high level, Docker is a Linux utility that can efficiently create, ship, and run containers.

Benefits of Containers
  • Reduced IT management resources
  • Reduced size of snapshots
  • Quicker spinning up apps
  • Reduced & simplified security updates
  • Less code to transfer, migrate, upload workloads
Popular Container Providers
  • Linux Containers
  • LXC
  • LXD
  • CGManager
  • Docker
  • Windows Server Containers









Thursday, February 14, 2019

Tutorial 01 – Introduction to the frameworks

A programming paradigm is a style, or "way", of programming.

1. Compare and contrast declarative and imperative paradigms
The declarative paradigm is a programming paradigm that expresses the logic of a computation without describing its control flow.
     1. Doesn't allow side effects
     2. The sequence of steps is not crucial

The declarative paradigm is known from languages like HTML, XML, CSS, SQL, Prolog, Haskell, F#, and Lisp.

The imperative paradigm is a programming paradigm that uses statements that change a program's state.
     1. Allows side effects
     2. Control flow is explicit, meaning it describes how the operation takes place, step by step, or more naively "first do this, then do that". Thus, the order of steps is crucial.

The imperative paradigm is known from languages like C, C++, C#, PHP, Java, and of course Assembly. (A small sketch contrasting the two styles follows below.)
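A minimal Java sketch (illustrative only) of the same task, summing the even numbers in a list, written first imperatively and then declaratively with the Stream API:

import java.util.Arrays;
import java.util.List;

public class ParadigmExample {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

        // Imperative: local state (total) is mutated and the control flow is explicit.
        int total = 0;
        for (int n : numbers) {
            if (n % 2 == 0) {
                total += n;
            }
        }
        System.out.println(total); // 6

        // Declarative: we describe what we want (the sum of the even numbers),
        // not how to loop; our code mutates no local state.
        int declarativeTotal = numbers.stream()
                .filter(n -> n % 2 == 0)
                .mapToInt(Integer::intValue)
                .sum();
        System.out.println(declarativeTotal); // 6
    }
}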

2.Discuss the difference between procedural programming and functional programming

Procedural Programming:

 Procedural programming uses a list of instructions to tell the computer what to do step by step. Procedural programming relies on procedures, also known as routines. A procedure contains a series of computational steps to be carried out. Procedural programming is also referred to as imperative or structured programming
  • The output of a routine does not always have a direct correlation with the input.
  • Everything is done in a specific order.
  • Execution of a routine may have side effects.
  • Tends to emphasize implementing solutions in a linear fashion.

Functional Programming:

Functional programming is an approach to problem solving that treats every computation as a mathematical function. The outputs of a function rely only on the values that are provided as input to the function and don't depend on a particular series of steps that precede the function. (A small illustrative sketch follows after the list below.)
  • Often recursive.
  • Always returns the same output for a given input.
  • Order of evaluation is usually undefined.
  • Must be stateless. i.e. No operation can have side effects.
  • Good fit for parallel execution
  • Tends to emphasize a divide and conquer approach.
  • May have the feature of Lazy Evaluation.
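A small, hypothetical Java sketch of the same computation, factorial, written procedurally with a mutated accumulator and then in a functional, recursive style:

public class FactorialExample {

    // Procedural: step-by-step instructions that mutate a local accumulator.
    static long factorialIterative(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) {
            result *= i;
        }
        return result;
    }

    // Functional style: recursive, no mutation; the result depends only on the
    // input, so the same input always produces the same output.
    static long factorialRecursive(int n) {
        return n <= 1 ? 1 : n * factorialRecursive(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorialIterative(5)); // 120
        System.out.println(factorialRecursive(5)); // 120
    }
}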
3. Explain the Lambda calculus and Lambda expressions in functional programming

Lambda calculus is a framework developed by Alonzo Church in the 1930s to study computations with functions. For function creation, Church introduced the notation λx.E to denote a function in which 'x' is a formal argument and 'E' is the functional body.
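Church's notation maps directly onto lambda expressions in modern languages. As a minimal Java sketch, λx.x+1 becomes the lambda x -> x + 1:

import java.util.function.Function;

public class LambdaExample {
    public static void main(String[] args) {
        // λx.x+1 written as a Java lambda: formal argument before the arrow, body after it.
        Function<Integer, Integer> increment = x -> x + 1;
        System.out.println(increment.apply(41)); // prints 42
    }
}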

4.What is meant by “no side-effects” and “referential transparency” in functional programming

Referential Transparency
A function that always returns the same result for the same input is called a pure function. A pure function is therefore a function with no observable side effects; if a function has any side effects, its evaluation could return different results even when we invoke it with the same arguments. We can substitute a pure function call with its calculated value, for example
def f(x: Int, y: Int) = x + y
The call f(2, 2) can be replaced by 4, as if the function were a big lookup table. We can do so because it does not have any side effects. The ability to replace an expression with its calculated value is called referential transparency.

No side-effects
In computer science, a function or expression is said to have a side effect if it modifies some state outside its local environment, that is to say has an observable interaction with the outside world besides returning a value. In functional programming, side effects are rarely used.
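A small, hypothetical Java sketch of a pure (referentially transparent) function next to an impure one whose side effect on shared state makes its results depend on more than its input:

public class PurityExample {
    static int counter = 0;            // shared state outside the functions

    // Pure: add(2, 2) can always be replaced by 4 (referential transparency).
    static int add(int x, int y) {
        return x + y;
    }

    // Impure: modifies state outside its local environment, so calling it
    // twice with the same argument gives different observable results.
    static int addAndCount(int x) {
        counter++;                     // side effect
        return x + counter;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 2));        // always 4
        System.out.println(addAndCount(2));   // 3
        System.out.println(addAndCount(2));   // 4 - same input, different output
    }
}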

5.Discuss the key features of Object Oriented Programming

Encapsulation - the process of wrapping code and data together into a single unit.
For example, a capsule that contains a mix of several medicines.

Abstraction - focuses on the essential characteristics of an object, relative to the perspective of the viewer.
For example, sending an SMS: you type the text and send the message, without knowing the internal processing behind the message delivery.

Polymorphism - the ability of an object to take many forms, so that the same method call can behave differently depending on the actual type of the object it is invoked on.

There are two types of polymorphism in Java: compile-time polymorphism and runtime polymorphism. We can achieve polymorphism in Java through method overloading and method overriding. (A small sketch follows below.)
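A small Java sketch (hypothetical class names) showing encapsulation, abstraction, and runtime polymorphism through method overriding:

abstract class Shape {                      // abstraction: only the essential behaviour
    abstract double area();
}

class Circle extends Shape {
    private final double radius;            // encapsulation: state is hidden behind the class

    Circle(double radius) {
        this.radius = radius;
    }

    @Override
    double area() {                         // overriding: runtime polymorphism
        return Math.PI * radius * radius;
    }
}

class Square extends Shape {
    private final double side;

    Square(double side) {
        this.side = side;
    }

    @Override
    double area() {
        return side * side;
    }
}

public class OopExample {
    public static void main(String[] args) {
        Shape[] shapes = {new Circle(1.0), new Square(2.0)};
        for (Shape s : shapes) {
            // The call is dispatched to the correct subclass at run time.
            System.out.println(s.area());
        }
    }
}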

6.How the event-driven programming is different from other programming paradigms?

Event-driven programming is a programming paradigm in which the flow of program execution is determined by events.
For example, a user action such as a mouse click or key press, or a message from the operating system or another program.
An event-driven application is designed to detect events as they occur, and then deal with them using an appropriate event-handling procedure (see the sketch after the list below).
  • Program waits for event
  • Whenever something happens the program responds and does something
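A minimal Java Swing sketch of the event-driven style: the program registers an event handler and then simply waits; the handler runs only when the click event occurs.

import javax.swing.JButton;
import javax.swing.JFrame;

public class EventDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Event-driven demo");
        JButton button = new JButton("Click me");
        // No polling: we register a handler that the GUI framework calls
        // back whenever the click event happens.
        button.addActionListener(e -> System.out.println("Button clicked"));
        frame.add(button);
        frame.setSize(200, 100);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}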


7.Compare and contrast the Compiled languages, Scripting languages, and Markup languages.
  • A markup language is a language which is used to represent structured data (XML, HTML).
  • A scripting language is a programming language which is interpreted rather than compiled; scripting languages represent a subset of all programming languages (JS, JSP, PHP).
  • A compiled language is a programming language whose implementations are typically compilers (translators that generate machine code from source code), and not interpreters (step-by-step executors of source code, where no pre-runtime translation takes place) (Java, C, C++).

8.Discuss the role of the virtual runtime machines. 
The virtual machine function enables the realization of a virtual machine environment: it allows you to create multiple independent virtual machines on one physical machine by virtualizing resources such as the CPU, memory, network, and disk that are installed on the physical machine.
9.Find how the JS code is executed (What is the runtime? where do you find the interpreter?) 

In a compiled language, the source code is passed through a program called a compiler, which translates it into bytecode or machine code that the machine understands and can execute. In contrast, JavaScript has no compilation step. Instead, an interpreter in the browser reads over the JavaScript code, interprets each line, and runs it.
10. Explain how the output of an HTML document is rendered, indicating the tools used to display the output
     HTML File
<html>
<body>
<h1>My First Heading</h1>
<p> Hello World</p>
</body>
</html>



     Running an HTML File
  1. Make sure that there is a browser installed on your computer. ...
  2. Find the saved file. ...
  3. Right-click (Windows) or double-click (Mac) the file and select "Open with" from the action menu. ...
  4. View your HTML file in your chosen browser. ...
  5. Alternate method: Run your browser, then press Ctrl-O

Output rendered in the web page by the browser:
My First Heading
Hello World

11.Identify different types of CASE tools, Workbenches, and Environments for different types of software systems (web-based systems, mobile systems, IoT systems, etc.).

Types of CASE tools

Classic CASE tools - established software development support tools (e.g. interactive debuggers, compilers, etc.)
Real CASE tools - can be separated into three different categories, depending on where in the development process they are most involved in:
  • Upper - support analysis and design phases
  • Lower - support coding phase
  • Integrated - also known as I-CASE support analysis, design and coding phases
Upper and lower CASE tools are named as such because of where the phases they support are in the Waterfall Model 

Environments for different types of software systems

The main SDLC environments include:
  • The Analysis and Design Environment.
  • The Development Environment.
  • The Common Build Environment.
  • The Testing Environment: this has two components, the Systems Integration Testing Environment and the User Acceptance Testing Environment.
  • The Production Environment.
12.Discuss the difference between framework, library, and plugin, giving some examples. 
Library
In programming, a library is a collection of precompiled routines that a program can use. The routines, sometimes called modules, are stored in object format. Libraries are particularly useful for storing frequently used routines, because you do not need to explicitly link them to every program that uses them.
At development time:
  • Add the library to the project (source code files, modules, packages, executables, etc.)
  • Call the necessary functions/methods using the given packages/modules/classes
At run time:
  • The library will be called by your code

Framework
In computer programming, a software framework is an abstraction in which software providing generic functionality can be selectively changed by additional user-written code, thus providing application-specific software. A software framework provides a standard way to build and deploy applications.
At development time:
  • Create the structure of the application
  • Place your code in the necessary places
  • You may use the given libraries to write your code
  • You can include additional libraries and plugins
At run time:
  • The framework will call your code (inversion of control)

Plugin
In computing, a plug-in (or plugin, add-in, addin, add-on, or addon) is a software component that adds a specific feature to an existing computer program. When a program supports plug-ins, it enables customization.
At development time:
  • The plugin (source code files, modules, packages, executables, etc.) is placed in the project
  • Apply some configurations using code
At run time:
  • The plug-in will be invoked via the configurations
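A small, hypothetical Java sketch of the difference in control flow: with a library your code makes the call, while with a framework or runtime your code is called back (inversion of control).

import java.util.Arrays;
import java.util.List;

public class LibraryVsFramework {
    public static void main(String[] args) throws InterruptedException {
        // Library style: your code calls the library (here, java.util).
        List<String> names = Arrays.asList("Ann", "Bob");
        System.out.println("Names in the list: " + names.size());

        // Framework style (inversion of control): you hand your code to the
        // runtime, and the runtime decides when to invoke it.
        Runnable task = () -> System.out.println("Called back by the runtime");
        Thread worker = new Thread(task);
        worker.start();
        worker.join();   // wait for the callback to finish
    }
}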





























Tutorial 11 – Client-side development 2 - RiWAs

1. Distinguish the term “Rich Internet Applications” (RIAs) from “Rich Web-based Applications” (RiWAs).  Definition What does Rich Inter...