Thursday, February 21, 2019

Tutorial 02 - Industry Practices and Tools 1

1. What is the need for a Version Control System (VCS)?
Version control is a system that records changes to a file or set of files over time so that you can recall specific versions later.
For the examples here, software source code is used as the files being version controlled, though in reality you can do this with nearly any type of file on a computer.


If you are a graphic or web designer and want to keep every version of an image or layout (which you would most certainly want to), a Version Control System (VCS) is a very wise thing to use.

  • It allows you to revert selected files back to a previous state
  • revert the entire project back to a previous state
  • compare changes over time
  • see who last modified something that might be causing a problem, and who introduced an issue and when
  • generally, it means that if you screw things up or lose files, you can easily recover
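For instance, with a VCS such as Git (a minimal sketch; the file name demo.txt and the commit hash are hypothetical placeholders):

    # Show the history of changes to one file
    git log --oneline demo.txt

    # Restore a single file to how it was in an earlier commit
    git checkout abc1234 -- demo.txt

    # Or move the entire project back to a previous state
    # (careful: --hard discards uncommitted changes)
    git reset --hard abc1234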
2. Differentiate the three models of VCSs, stating their pros and cons
Local Version Control Systems
A local version control system keeps track of files within the local system. This approach is very common and simple, but it is also error prone: the chances of accidentally writing to the wrong file are higher.
Example: RCS



Centralized Version Control Systems
In this approach, all changes to the files are tracked by a centralized server. The server contains all the versioned files and a list of clients that check out files from that central place. The major drawback is the single point of failure: if the server goes down, nobody can collaborate or save versioned changes during that time.
Example: CVS, Subversion, and Perforce
Distributed Version Control Systems
Distributed version control systems were developed to overcome the main drawback of centralized version control systems. Clients fully clone the repository, including its complete history, so if any server dies, any of the client repositories can be copied back to the server to restore it.
Every clone is considered a full backup of all the data.

Examples: Git, Mercurial, Bazaar, and Darcs
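As a quick illustration with Git: cloning copies the repository together with its full history, which is why any clone can stand in as a backup of the server (the URL below is a hypothetical placeholder):

    # Clone a remote repository, including its complete history
    git clone https://example.com/project.git
    cd project

    # Every past commit is available locally
    git log --oneline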




3. Git and GitHub, are they the same or different? Discuss with facts

Git and GitHub are different.
Git is a revision control system, a tool to manage your source code history.
GitHub is a hosting service for Git repositories.
Git is the tool; GitHub is a service for projects that use Git.



4. Compare and contrast the Git commands commit and push

Since Git is a distributed version control system, the difference is that commit records changes to your local repository, whereas push updates a remote repository with your local changes.

In the words of the Git documentation, git commit "records changes to the repository" while git push "updates remote refs along with associated objects".

5. Discuss the use of the staging area and the Git directory
Staging area
Git allows you to specify exactly which changes should be committed. To accomplish this, it uses an intermediate area called the staging area, to which you can add files one at a time before committing.
Git directory
The Git directory is where Git stores the metadata and object database for your project. This is the most important part of Git, and it is what is copied when you clone a repository from another computer. The working directory, by contrast, is a single checkout of one version of the project.
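A short sketch of how the working directory, the staging area, and the Git directory interact (the file name is hypothetical):

    # 1. Modify a file in the working directory
    echo "new line" >> notes.txt

    # 2. Move the change into the staging area
    git add notes.txt

    # 3. Inspect what is staged versus unstaged
    git status

    # 4. Record the staged snapshot into the Git directory
    git commit -m "Add a note"

    # The metadata and object database live under .git
    ls .git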

6. Explain the collaboration workflow of Git, with an example
Gitflow Workflow is a Git workflow design that was first published and made popular by Vincent Driessen at nvie. The Gitflow Workflow defines a strict branching model designed around project releases, which provides a robust framework for managing larger projects.
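For example, under Gitflow a feature is branched off develop and merged back when it is finished (branch names are hypothetical):

    # Start new work from the long-lived develop branch
    git checkout develop
    git checkout -b feature/login-form

    # ...commit work on the feature branch...
    git commit -am "Add login form"

    # Merge the finished feature back into develop
    git checkout develop
    git merge feature/login-form

    # Releases are cut from develop onto a release branch
    git checkout -b release/1.0 develop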

7. Discuss the benefits of CDNs 
1. Your Server Load will decrease:
Strategically placed servers form the backbone of the network and increase the capacity and number of concurrent users that can be handled. Essentially, the content is spread out across several servers, as opposed to offloading it all onto one large server.

2. Content Delivery will become faster:
Due to higher reliability, operators can deliver high-quality content with a high level of service, low network server loads, and thus lower costs. Moreover, jQuery is ubiquitous on the web, so there is a high probability that someone visiting a particular page has already downloaded it from the Google CDN while visiting another site. In that case the file is already cached by the browser and the user won't need to download it again.

3. Segmenting your audience becomes easy:
CDNs can deliver different content to different users depending on the kind of device requesting the content. They are capable of detecting the type of mobile devices and can deliver a device-specific version of the content.

4. Storage and Security:
CDNs offer secure storage capacity for content such as videos for enterprises that need it, as well as archiving and enhanced data backup services. CDNs can secure content through Digital Rights Management and limit access through user authentication.
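One way to see CDN caching in action is to inspect the response headers of a library served from a public CDN, for example with curl (a sketch; the exact headers returned will vary):

    # Fetch only the HTTP headers for jQuery on the Google CDN
    curl -sI https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js

    # Look for caching-related headers in the output, e.g.
    #   cache-control: public, max-age=31536000
    #   age: 86400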



In summary, the main benefits of a CDN include:
  1. Performance: reduced latency and minimized packet loss
  2. Scalability: automatic scaling up for traffic spikes
  3. SEO improvement: site speed is a Google SEO ranking factor
  4. Reliability: automatic redundancy between edge servers
  5. Lower costs: saved bandwidth with your web host
  6. Security: CDNs such as KeyCDN can mitigate DDoS attacks on edge servers
8. How do CDNs differ from web hosting servers?
  • Web hosting is used to host your website on a server and let users access it over the internet. A content delivery network is about speeding up the access/delivery of your website's assets to those users.
  • Traditional web hosting delivers 100% of your content from one location. If a user is located on the other side of the world, they must still wait for the data to be retrieved from wherever your web server sits. A CDN takes a majority of your static and dynamic content and serves it from across the globe, decreasing download times. Most of the time, the closer the CDN server is to the web visitor, the faster the assets will load.
  • Web hosting normally refers to one server. A content delivery network refers to a global network of edge servers which distributes your content from a multi-host environment.


9. Identify free and commercial CDNs

Commercial CDNs
Many large websites use commercial CDNs like Akamai Technologies to cache their web pages around the world. A website that uses a commercial CDN works as follows: the first time a page is requested, by anyone, it is built by the web server and then also cached on the CDN server. When another visitor requests the same page, the CDN is checked first to determine whether the cached copy is up to date. If it is, the CDN delivers it; otherwise, the CDN requests the page from the server again and caches that new copy.
A commercial CDN is a very useful tool for a large website that gets millions of page views, but it might not be cost effective for smaller websites.
  • Akamai
  • Amazon CloudFront
  • Cloudflare
  • Fastly
  • KeyCDN
  • Microsoft Azure CDN

Free CDNs
While there are a number of fantastic premium CDN solutions you can choose from, there are also a lot of great free CDNs you can utilize to help decrease the costs of your next project. Most likely you are already using some of them without even knowing it. Check out some of the free CDNs below.
  • Google CDN
  • Microsoft Ajax CDN
  • Yandex CDN
  • jsDelivr
  • cdnjs
  • jQuery CDN

10. Discuss the requirements for virtualization
Virtualization separates the backend level from the user level, creating a seamless environment between the two. It is used to deploy cloud computing service models such as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Typical requirements include CPU virtualization extensions on the host (Intel VT-x or AMD-V) for hardware-assisted virtualization, sufficient memory and storage for each guest, and a hypervisor or virtualization software to create and manage the virtual machines.

11. Discuss and compare the pros and cons of different virtualization techniques at different levels
Guest Operating System Virtualization

Guest OS virtualization is perhaps the easiest concept to understand. In this scenario the physical host computer runs a standard unmodified operating system such as Windows, Linux, UNIX or Mac OS X. Running on this operating system is a virtualization application, which executes in much the same way as any other application, such as a word processor or spreadsheet, would run on the system. The advantage is simplicity: no special hardware or kernel modifications are needed. The disadvantage is performance, since every guest instruction must pass through the virtualization application and the host operating system.

Shared Kernel Virtualization

Shared kernel virtualization (also known as system level or operating system virtualization) takes advantage of the architectural design of Linux and UNIX based operating systems. In order to understand how shared kernel virtualization works it helps to first understand the two main components of Linux or UNIX operating systems. At the core of the operating system is the kernel. The kernel, in simple terms, handles all the interactions between the operating system and the physical hardware. The second key component is the root file system, which contains all the libraries, files and utilities necessary for the operating system to function. Under shared kernel virtualization the virtual guest systems each have their own root file system but share the kernel of the host operating system. The advantage is efficiency: guests share a single kernel and are therefore very lightweight. The disadvantage is that every guest must run the same operating system kernel as the host, so you cannot, for example, run a Windows guest on a Linux host this way.

Kernel Level Virtualization

Under kernel level virtualization the host operating system runs on a specially modified kernel which contains extensions designed to manage and control multiple virtual machines each containing a guest operating system. Unlike shared kernel virtualization each guest runs its own kernel, although similar restrictions apply in that the guest operating systems must have been compiled for the same hardware as the kernel in which they are running. Examples of kernel level virtualization technologies include User Mode Linux (UML) and Kernel-based Virtual Machine (KVM).
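On a Linux host you can check whether kernel level virtualization with KVM is possible (a sketch; these paths are standard on most distributions):

    # Check for hardware virtualization extensions (Intel VT-x or AMD-V)
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # Check that the KVM kernel modules are loaded
    lsmod | grep kvm

    # The KVM device node appears when the modules are active
    ls -l /dev/kvm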

12. Identify popular implementations and available tools for each level of visualization 
Data visualization's central role in advanced analytics applications includes uses in planning and developing predictive models as well as reporting on the analytical results they produce.

13. What is the hypervisor and what is its role?
Hypervisor
A hypervisor is a hardware virtualization technique that allows multiple guest operating systems (OS) to run on a single host system at the same time. The guest OSes share the hardware of the host computer, such that each OS appears to have its own processor, memory and other hardware resources.
A hypervisor is also known as a virtual machine manager (VMM).

A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems.
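For example, on a Linux host using the KVM hypervisor through libvirt, you can ask the hypervisor about its guests (assuming libvirt and its virsh tool are installed):

    # List running guest machines
    virsh list

    # Include guests that are defined but shut off
    virsh list --all

    # Show what the hypervisor reports about the host machine
    virsh nodeinfo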
14. How is emulation different from VMs?

Virtualization vs. Emulation


Emulation

Emulation is what we do to imitate the behavior of another program or device. It is similar to a sandbox, allowing you to replicate the behaviors and characteristics of a particular piece of software or hardware on a platform that was not designed for them.


Virtualization

Virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single physical hardware system. This includes splitting a single physical infrastructure into multiple virtual servers, letting it appear as though each virtual machine is running on its own dedicated hardware and allowing each of them to be rebooted independently.
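QEMU illustrates the difference well, since it can do both (a sketch; the kernel and disk image names are hypothetical placeholders):

    # Emulation: run an ARM guest on an x86 host by translating every instruction
    qemu-system-arm -M virt -m 512 -kernel guest-arm-kernel

    # Virtualization: run an x86 guest on an x86 host, accelerated by KVM
    qemu-system-x86_64 -enable-kvm -m 2048 -hda guest-x86.img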

15. Compare and contrast VMs and containers/Docker, indicating their advantages and disadvantages

Both VMs and containers can help get the most out of available computer hardware and software resources. Containers are the new kids on the block, but VMs have been, and continue to be, tremendously popular in data centers of all sizes.

What are VMs?
A virtual machine (VM) is an emulation of a computer system. Put simply, it makes it possible to run what appear to be many separate computers on hardware that is actually one computer.

The operating systems (“OS”) and their applications share hardware resources from a single host server, or from a pool of host servers. Each VM requires its own underlying OS, and the hardware is virtualized. A hypervisor, or a virtual machine monitor, is software, firmware, or hardware that creates and runs VMs. It sits between the hardware and the virtual machine and is necessary to virtualize the server.

Since the advent of affordable virtualization technology and cloud computing services, IT departments large and small have embraced virtual machines (VMs) as a way to lower costs and increase efficiency.
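As a small example using VirtualBox's command line tool (the VM name and settings are hypothetical):

    # Create and register a new VM
    VBoxManage createvm --name "demo-vm" --ostype Ubuntu_64 --register

    # Allocate memory and CPUs from the host's resources
    VBoxManage modifyvm "demo-vm" --memory 2048 --cpus 2

    # Start the VM without a GUI window
    VBoxManage startvm "demo-vm" --type headless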
Benefits of VMs
  • All OS resources available to apps
  • Established management tools
  • Established security tools
  • Better known security controls
Popular VM Providers
  • VMware vSphere
  • VirtualBox
  • Xen
  • Hyper-V
  • KVM
What are Containers?
With containers, instead of virtualizing the underlying computer like a virtual machine (VM), just the OS is virtualized.

Containers sit on top of a physical server and its host OS, typically Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce the operating system code, and means that a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally light: they are only megabytes in size and take just seconds to start. VMs, by contrast, take minutes to start and are an order of magnitude larger than an equivalent container.

In contrast to VMs, all that a container requires is enough of an operating system, supporting programs and libraries, and system resources to run a specific program. What this means in practice is that you can put two to three times as many applications on a single server with containers as you can with a VM. In addition, with containers you can create a portable, consistent operating environment for development, testing, and deployment.
Types of Containers
Linux Containers (LXC) — The original Linux container technology is Linux Containers, commonly known as LXC. LXC is a Linux operating system level virtualization method for running multiple isolated Linux systems on a single host.

Docker — Docker started as a project to build single-application LXC containers, introducing several changes to LXC that make containers more portable and flexible to use. It later morphed into its own container runtime environment. At a high level, Docker is a Linux utility that can efficiently create, ship, and run containers.
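A minimal Docker sketch (the image and port choices are arbitrary):

    # Pull an image and start a container from it
    docker run -d --name web -p 8080:80 nginx

    # Containers start in seconds and share the host kernel
    docker ps

    # Stop and remove the container
    docker stop web && docker rm web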

Benefits of Containers
  • Reduced IT management resources
  • Reduced size of snapshots
  • Quicker to spin up apps
  • Reduced & simplified security updates
  • Less code to transfer, migrate, upload workloads
Popular Container Providers
  • Linux Containers
  • LXC
  • LXD
  • CGManager
  • Docker
  • Windows Server Containers








