Interview Questions: 1. What Is DevOps?

October 5, 2022 | Author: Anonymous | Category: N/A

1. What is DevOps?

The first question that you would come across in an Azure DevOps interview would deal with the root element. DevOps is a culture or paradigm shift that implies collaboration between development and operations teams in an organization. The union of process, product, and people helps DevOps provide continuous integration and continuous delivery of value to end users. Basically, DevOps increases the speed of processes for the delivery of applications and software services at higher velocity. The continuous delivery aspect of DevOps also minimizes risk by collecting stakeholder and end-user feedback.

2. What are the reasons to use DevOps?

Candidates could easily find this entry as one of the common Azure DevOps interview questions. DevOps helps enterprises deliver smaller features to customers with higher efficiency and speed. The functionalities of DevOps clearly indicate its potential for providing seamless software delivery. Examples of success with DevOps include Google and Amazon. These tech giants achieve thousands of code deployments every day while delivering the benefits of security, stability, and reliability.

3. What advantages does DevOps provide?

The response to this question should focus on two distinct aspects. The benefits of DevOps are evident in the form of technical and business benefits. The technical benefits include continuous software delivery, faster problem solving, and limited complexity of problems. The business benefits include faster delivery of features and additional time for adding value to the end product. In addition, the business benefits of DevOps are also evident in the improvement of stability in operating environments.

4. Present one example of the use of DevOps in real life.

Various industries are using DevOps, leading to a vast number of use cases that can serve as responses here. One example is Etsy, a peer-to-peer e-commerce website focusing on handmade or vintage products and supplies. Etsy had to face issues with slow and stressful site updates that led to frequent downtimes. As a result, millions of sellers on Etsy's marketplace lost sales to their competition. Etsy took a step away from the traditional waterfall model towards DevOps. Now, it employs a completely automated deployment pipeline along with proven continuous delivery practices, leading to over 50 deployments daily. The best thing about Etsy's use case is that it does not experience frequent disruptions with deployment after adopting DevOps.

5. What are the major areas of DevOps tools?

Candidates could face this simple question, among other common Azure DevOps interview questions. The answer implies that automation plays a major role in the implementation of DevOps. Therefore, DevOps tools are highly dominant in the areas of planning, code management, building and testing, and release management. In addition, DevOps tools also have functionalities in deployment and monitoring tasks in the DevOps ecosystem.

6. What are the popular DevOps tools for continuous integration and continuous deployment?

The notable DevOps tools for continuous integration include Jenkins, GitLab CI, TeamCity, Bamboo, Codeship, CircleCI, and Travis CI. The popular DevOps tools for continuous deployment include Azure Pipelines for Deployment, Jenkins, Bamboo, DeployBot, Shippable, ElectricFlow, and TeamCity.

7. What is continuous testing, and what are the ideal DevOps tools for it?

Candidates could expect to face this entry in frequently asked Azure DevOps interview questions. First of all, you need to understand that DevOps is not only about tools or process improvements. DevOps focuses on people, automation, and culture changes. Automated testing, through writing scripts that execute the testing process automatically, enables frequent releases. Many open-source tools for test automation can help in achieving the DevOps objective of continuous testing. Some of the notable DevOps tools for continuous testing are Selenium, JMeter, AntUnit, JUnit, SoapUI, and Cucumber.

8. What is Azure DevOps?

Azure DevOps is the new name for Microsoft Visual Studio Team Services (VSTS). It is known as a promising application lifecycle management tool. Azure DevOps helps in planning a project with the help of Agile tools and templates. Its other functionalities include the management and running of test plans and version control of source code alongside the management of branches. In addition, Azure DevOps also helps in the deployment of a solution across different platforms by leveraging Azure Pipelines. Azure DevOps facilitates continuous integration and continuous deployment for faster and more effective deployment.

9. What is the difference between Azure DevOps Services and Azure DevOps Server?

Candidates would generally face this entry as one of the tricky Azure DevOps interview questions. Azure DevOps Services is the cloud service of Microsoft: a highly scalable, reliable, and globally available hosted service. On the other hand, Azure DevOps Server is an on-premises offering built on a SQL Server back end. Enterprises choose the on-premises option when they need their data to stay within their network. Another scenario for choosing on-premises involves the need for SQL Server reporting services that integrate effectively with Azure DevOps data and tools. Both Azure DevOps Services and Azure DevOps Server offer similar basic services, albeit with certain added benefits of the former. Here are the additional advantages of Azure DevOps Services:

- Simpler server management
- Better connectivity with remote sites
- Faster access to new and productive features
- A shift in focus from capital expenditures on servers and infrastructure towards operational expenditures on subscriptions







       

10. Which factors should I consider for choosing between Azure DevOps Services and Azure DevOps Server?

Candidates could find this entry as one of the advanced Azure DevOps interview questions. Most important of all, you can get follow-up questions regarding each factor in response to this question. The important factors to consider before choosing between Azure DevOps Services and Azure DevOps Server are:

- Scope and scale of data
- Authentication requirements
- Users and groups
- Management of user access
- Security and data protection precedents
- Process customization
- Reporting

11. What are the different DevOps solution architectures?

You can leverage multiple tools and technologies with Azure to design solution architectures for the following DevOps scenarios:

- CI/CD for containers
- Java CI/CD using Jenkins and Azure Web Apps
- Container CI/CD using Jenkins and Kubernetes on Azure Kubernetes Service
- Immutable infrastructure CI/CD using Jenkins and Terraform on Azure virtual architecture
- DevTest image factory
- CI/CD for Azure VMs
- CI/CD for Azure Web Apps

12. What are Azure Boards?

Azure Boards is an Azure DevOps service that helps in the management of work in software projects. Azure Boards provides a diverse set of capabilities such as customizable dashboards, integrated reporting, and native support for Kanban and Scrum. The core features of Azure Boards include work items, boards, backlogs, sprints, dashboards, and queries.

13. What are the important reasons to use Azure Boards?

The applications of Azure Boards and its features are the foremost reasons to choose it. Here is an outline of the prominent reasons to use Azure Boards:

- Simple to start with, with an opportunity for scaling as per growth levels
- The facility of visually interactive tools
- Ease of customization
- In-built tools for social communication
- Flexible information capturing and ample cloud storage capacity
- Easy to find requirements, and the facility of notifications regarding changes
- Monitoring of status and progress with in-built analytics and dashboards
- Integration with MS Office
- The benefit of extensions and extensibility
- Opportunity to start without a price

14. What is Azure Repos?

Candidates should prepare for basic yet tough Azure DevOps interview questions like this one. Azure Repos is a version control system that helps in managing code and its different versions throughout the development lifecycle. Azure Repos can help in tracking changes in the code by different teams. The detailed record of the history of changes can help in coordinating with the team to merge the changes at a later stage. The interesting factor about Azure Repos is that it offers both a centralized version control system and a distributed version control system. Git is the distributed version control system in this case. On the other hand, Team Foundation Version Control (TFVC) is the centralized version control system.

15. What are containers in DevOps, and which container platforms does Azure DevOps support?

A container provides an easy approach for packaging software code, related configurations, packages, and dependencies in a single project. Candidates could find this entry in Azure DevOps interview questions generally. You can extend the response by stating that multiple containers could run on the same machine and share the operating system with other containers. As a result, containers could help in faster, more consistent, and more reliable deployments. Azure DevOps provides container support for Docker and ASP.NET with containers. In addition, Azure Kubernetes Service and Azure Service Fabric applications with Docker support also provide container support on Azure.

16. What are Azure Pipelines?

This is one of the technical Azure DevOps interview questions for the consideration of candidates. Azure Pipelines is a service on the Azure cloud which you can use for automatically building and testing code projects. In addition, it also works effectively with the majority of languages and project types, thereby improving the availability of code projects to other users.

17. What are the reasons to use CI and CD and Azure Pipelines?

Implementation of CI and CD pipelines is one of the best approaches for ensuring reliable and quality code. This is one of the important Azure DevOps interview questions that you should focus on. Azure Pipelines offer an easy, secure, and faster approach for automating the processes to build projects and ensure their availability. In addition, the use of Azure Pipelines for public projects is completely free. On the other hand, using private projects is also cost-effective, as you get around 30 hours of pipeline jobs per month for free. In addition, you can also present the following reasons to use Azure Pipelines for CI and CD:

- Support for any language or platform
- Deployment to various types of targets simultaneously
- Integration with Azure deployments
- Building on Windows, Mac, and Linux machines
- Integration with GitHub
- Capability for working with open-source projects

18. What are Azure Test Plans?

Candidates should prepare for Azure DevOps interview questions like this one. Azure Test Plans is a service within Azure DevOps that provides a browser-based test management solution. It provides crucial capabilities in exploratory testing, user acceptance testing, and planned manual testing. Azure Test Plans also has a browser extension for exploratory testing alongside the collection of feedback from stakeholders.

Manual and exploratory testing are important techniques for the evaluation of a product's or service's quality. In addition, Azure Test Plans is also responsible for realizing the focus of DevOps on automated testing. Azure Test Plans helps in assimilating the contributions of developers, testers, product owners, user experience advocates, and managers to the quality of a project.

19. What is the role of Azure Artifacts?

Candidates could find such Azure DevOps interview questions related to components of Azure DevOps commonly in interviews. Azure Artifacts serves as an extension of Azure DevOps Services and Azure DevOps Server. The service comes pre-installed in Azure DevOps Server 2019, Team Foundation Server (TFS) 2017 and 2018, and Azure DevOps Services. Azure Artifacts brings the concept of multiple feeds for the first time. Multiple feeds can help in organizing and controlling access to packages. Azure Artifacts helps in the creation and sharing of Maven, NuGet, and npm package feeds from private and public sources with teams of varying sizes. Azure Artifacts provides the facility of adding fully integrated package management to your continuous integration/continuous delivery (CI/CD) pipelines in a single click.

20. What should you do to make a NuGet package available to anonymous users outside your organization while minimizing the number of publication points?

The solution to this question is the creation of a new feed for the package. Packages hosted in Azure Artifacts are stored in a feed. Setting up permissions on the feed enables sharing packages with higher scalability according to the scenario's requirements. The multiple feeds in Azure Artifacts help in controlling access to packages across four levels of access. The four levels of access are owners, readers, contributors, and collaborators.

21. What recommendations would you provide for an application enabling communication between members of a development team working in different locations around the world using Azure DevOps?

The foremost criterion for such an application would be the ability to isolate members of different project teams into different communication channels. In addition, it should also maintain a history of communication in the concerned channels. Furthermore, the application should integrate effectively with Azure DevOps and provide the ability to add external contractors and suppliers to projects. Microsoft Teams offers the right capabilities to address these needs.

 

Classification of different teams allows users to create different channels for organizing communications according to the topic. Every channel could include a few users or even thousands of users. The guest access feature in Microsoft Teams provides the capability for inviting external people to join internal channels for file sharing, messaging, and meetings. The feature helps in providing business-to-business project management. Microsoft Teams also integrates directly with Azure DevOps.

22. Which feature would you use for developing a multi-tier application using Azure App Service web apps as the front end and Azure SQL Database as the back end? The application should send the Azure DevOps team an email message in the event of the front end's failure to return status code "200".

Application Map in Azure Application Insights is the recommended option in this case, as it helps in the identification of performance bottlenecks. In addition, it also helps in identifying failure hotspots in different components of the multi-tier application. Every node on the map provides a representation of an application component and its related dependencies. In addition, it also provides health KPI status and alerts.

23. What solution would you recommend to improve the quality of code upon discovering many unused variables and empty catch blocks?

The solution is to select "Run PMD" in a Maven build task. PMD is a source code analyzer that identifies common programming errors such as unused variables, unnecessary object creation, and empty code blocks. The Apache Maven PMD Plugin helps in automatically running the PMD code analysis tool on a project's source code. The site report provides detailed results about errors in the code.

24. What are the necessary components for integrating Azure DevOps and Bitbucket?

The solution to this question refers to a self-hosted agent and an external Git service connection. GitLab CI/CD is compatible with GitHub and Git servers like Bitbucket. Rather than shifting an entire project to GitLab, it is possible to connect an external repository to obtain the benefits of GitLab CI/CD.

25. What are Azure DevOps Projects?

Azure DevOps Projects is an effective option for obtaining a simplified experience for bringing existing code and a Git repository to Azure and creating a CI and CD pipeline. Azure DevOps Projects can also be used by selecting one of the sample applications.

 

26. List out the core operations of DevOps with application development and with infrastructure.

The core operations of DevOps with application development and with infrastructure are:

Application development:
- Code building
- Code coverage
- Unit testing
- Packaging
- Deployment

Infrastructure:
- Provisioning
- Configuration
- Orchestration
- Deployment

27. Name the tools that are used for continuous testing.

Many open-source tools are available for test automation. A few of them are:

- Selenium
- JMeter
- JUnit
- AntUnit
- Cucumber
- SoapUI
- Tricentis Tosca

28. Which tools are useful for infrastructure configuration?



    

The most popular tools for infrastructure configuration are as follows:

- Chef
- Puppet
- Ansible

29. Mention some important features of Memcached.

Memcached offers a wide variety of features. To name a few:

- CAS tokens
- Callbacks
- getDelayed
- Binary protocol
- Igbinary

30. Is it possible to share a single instance of Memcache between multiple projects?

Yes, it is possible to share a single instance of Memcache between multiple projects. Memcache is basically a memory storage space, and you can run Memcache on one or more servers. You can also configure your client to speak to a particular set of instances. In this way, you can run two different Memcache processes on the same host, and yet they are completely independent.
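To make the idea concrete, here is a minimal sketch of how a client can be pointed at a particular set of independent instances. This is not a real memcached client: the `ToyMemcacheClient` class and the dict-backed "servers" are illustrative stand-ins for actual memcached processes.

```python
import hashlib

class ToyMemcacheClient:
    """Illustrative stand-in for a memcached client: each 'server' is
    just an in-memory dict, and keys are hashed to pick a server."""

    def __init__(self, servers):
        # In a real client these would be (host, port) connections;
        # here each server is simulated by a plain dict.
        self.servers = servers

    def _pick(self, key):
        # Client-side sharding: hash the key to choose an instance.
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.servers[int(digest, 16) % len(self.servers)]

    def set(self, key, value):
        self._pick(key)[key] = value

    def get(self, key):
        return self._pick(key).get(key)

# Two independent cache processes running on the same host:
instance_a, instance_b = {}, {}

# Project 1 talks only to instance A; project 2 only to instance B.
project1 = ToyMemcacheClient([instance_a])
project2 = ToyMemcacheClient([instance_b])

project1.set("user:42", "alice")
project2.set("user:42", "bob")

print(project1.get("user:42"))  # -> alice
print(project2.get("user:42"))  # -> bob
```

Because each client is configured with its own instance list, the same key lives independently in each project's cache, which is the point made in the answer above.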

31. Explain what the dogpile effect is. How can you prevent it?

The dogpile effect refers to the moment when a cache expires and the client hits the website with multiple simultaneous requests, all trying to regenerate the same value. The easiest way to prevent this effect is to use a semaphore lock: only the caller that acquires the lock generates the new value as the cache expires.
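A minimal sketch of the semaphore-lock idea in Python follows; the class and field names are illustrative, not from any particular caching library.

```python
import threading
import time

class DogpileSafeCache:
    """Sketch of dogpile prevention: when the value expires, only the
    caller holding the lock recomputes it; callers that lose the race
    serve the last known value instead of all hitting the backend."""

    def __init__(self, compute, ttl):
        self.compute = compute          # expensive function to cache
        self.ttl = ttl                  # seconds before expiry
        self.value = None
        self.expires_at = 0.0
        self.lock = threading.Lock()    # the "semaphore lock"

    def get(self):
        if time.monotonic() < self.expires_at:
            return self.value           # fresh: serve from cache
        if self.lock.acquire(blocking=False):
            try:
                # Only one caller regenerates the value on expiry.
                self.value = self.compute()
                self.expires_at = time.monotonic() + self.ttl
            finally:
                self.lock.release()
        # Callers that lost the race return the last known value.
        return self.value

calls = []
cache = DogpileSafeCache(lambda: calls.append(1) or len(calls), ttl=60)
print(cache.get(), cache.get(), cache.get())  # computed once -> 1 1 1
```

The non-blocking `acquire` is the key design choice: stampeding callers are never queued up behind the recomputation, so the backend sees at most one regeneration per expiry.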

32. Explain pair programming with reference to DevOps.

Pair programming is basically an engineering practice from the Extreme Programming rules. Under this method, two programmers work on the same system, on the same design/algorithm/code. While one programmer acts as the "driver", the other acts as an "observer" who continuously monitors the progress of the work to identify problems. Not to mention, the roles can be swapped at any point of time without any prior intimation.

33. Can we move or copy Jenkins from one server to another?

Yes, it is possible to move as well as copy Jenkins from one server to another. For instance, you can move a job from one Jenkins installation to another by copying the corresponding job directory; the jobs directory can be copied from the older server to the new or current server.
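The copy described above can be sketched as follows, assuming a standard Jenkins home layout with a `jobs` subdirectory. The paths and the throwaway demo directories here are illustrative, not a prescription for any particular install.

```python
import os
import shutil
import tempfile

def copy_jenkins_jobs(src_home, dst_home):
    """Copy the jobs directory from one Jenkins home to another.
    Real Jenkins homes often live under e.g. /var/lib/jenkins,
    but the location varies by setup."""
    src_jobs = os.path.join(src_home, "jobs")
    dst_jobs = os.path.join(dst_home, "jobs")
    shutil.copytree(src_jobs, dst_jobs, dirs_exist_ok=True)
    return sorted(os.listdir(dst_jobs))

# Demo with throwaway directories standing in for two servers:
old_home = tempfile.mkdtemp()
new_home = tempfile.mkdtemp()
os.makedirs(os.path.join(old_home, "jobs", "build-app"))
with open(os.path.join(old_home, "jobs", "build-app", "config.xml"), "w") as f:
    f.write("<project/>")       # minimal stand-in for a job config

print(copy_jenkins_jobs(old_home, new_home))  # -> ['build-app']
```

After the copy, Jenkins on the destination server picks up the job once it reloads its configuration (or is restarted).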

34. Can we install Ansible on the controlling machines?

Yes, we can install Ansible on the controlling machine; the managed nodes are then handled from that machine with the help of SSH.

35. What is the Forking Workflow?

The Forking Workflow gives every developer their own server-side repository. As a result, it supports open-source projects well.

36. List out some of the useful plugins in Jenkins.

Below are some important plugins:

- Maven 2 project
- Amazon EC2
- HTML publisher
- Copy artifact
- Join
- Green Balls

37. How is DevOps different from the Agile methodology?

DevOps is a culture that allows the development and the operations teams to work together. This results in continuous development, testing, integration, deployment, and monitoring of the software throughout the lifecycle.

Agile is a software development methodology that focuses on iterative, incremental, small, and rapid releases of software, along with customer feedback. It addresses gaps and conflicts between the customer and developers. DevOps, in turn, addresses gaps and conflicts between developers and IT operations.

38. What are the different phases in DevOps?

The various phases of the DevOps lifecycle are as follows:

- Plan - Initially, there should be a plan for the type of application that needs to be developed. Getting a rough picture of the development process is always a good idea.
- Code - The application is coded as per the end-user requirements.
- Build - Build the application by integrating the various codes formed in the previous steps.
- Test - This is the most crucial step of application development. Test the application and rebuild, if necessary.
- Integrate - Multiple codes from different programmers are integrated into one.
- Deploy - Code is deployed into a cloud environment for further usage. It is ensured that any new changes do not affect the functioning of a high-traffic website.
- Operate - Operations are performed on the code if required.
- Monitor - Application performance is monitored. Changes are made to meet the end-user requirements.

39. How will you approach a project that needs to implement DevOps?

The following standard approach can be used to implement DevOps in a specific project:

Stage 1: Assess the existing process and implementation for about two to three weeks to identify areas of improvement, so that the team can create a road map for the implementation.

Stage 2: Create a proof of concept (PoC). Once it is accepted and approved, the team can start on the actual implementation and roll-out of the project plan.

Stage 3: By following the proper steps for version control, integration, testing, deployment, delivery, and monitoring, the project is now ready for DevOps implementation.

40. What is the difference between continuous delivery and continuous deployment?

Continuous delivery:
- Ensures code can be safely deployed to production
- Ensures business applications and services function as expected
- Delivers every change to a production-like environment through rigorous automated testing

Continuous deployment:
- Every change that passes the automated tests is deployed to production automatically
- Makes software development and the release process faster and more robust
- Requires no explicit approval from a developer, but does require a developed culture of monitoring

 

 

41. How does continuous monitoring help you maintain the entire architecture of the system?

Continuous monitoring in DevOps is a process of detecting, identifying, and reporting any faults or threats in the entire infrastructure of the system. It:

- Ensures that all services, applications, and resources are running on the servers properly.
- Monitors the status of servers and determines whether applications are working correctly or not.
- Enables continuous audit, transaction inspection, and controlled monitoring.
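As a rough illustration, a continuous monitoring pass boils down to running health checks and aggregating their results. The check names and stub lambdas below are placeholders for real probes (HTTP pings, database queries, disk-usage thresholds).

```python
def run_checks(checks):
    """Run each named health check and report overall status.
    The check functions are illustrative stand-ins for real probes."""
    results = {name: check() for name, check in checks.items()}
    return {"healthy": all(results.values()), "results": results}

checks = {
    "web-frontend": lambda: True,   # e.g. HTTP 200 from a ping URL
    "database": lambda: True,       # e.g. SELECT 1 succeeded
    "disk-space": lambda: False,    # e.g. usage above threshold
}

report = run_checks(checks)
print(report["healthy"])  # -> False
print([name for name, ok in report["results"].items() if not ok])
# -> ['disk-space']
```

In a real setup this pass would run on a schedule and feed a dashboard or alerting system rather than printing to stdout.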

42. Name three important DevOps KPIs.

The three important KPIs are as follows:

- Mean time to failure recovery - the average time taken to recover from a failure.
- Deployment frequency - the frequency at which deployments occur.
- Percentage of failed deployments - the number of times the deployment fails.
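All three KPIs can be computed directly from a deployment log. The sample records and their layout below are invented for illustration.

```python
from datetime import datetime

deployments = [
    # (timestamp, succeeded, minutes_to_recover_if_failed) -- sample data
    (datetime(2022, 10, 1), True, 0),
    (datetime(2022, 10, 2), False, 30),
    (datetime(2022, 10, 4), True, 0),
    (datetime(2022, 10, 5), False, 90),
    (datetime(2022, 10, 7), True, 0),
]

# Deployment frequency: deployments per day over the observed window.
days = (deployments[-1][0] - deployments[0][0]).days or 1
frequency = len(deployments) / days

# Percentage of failed deployments.
failures = [d for d in deployments if not d[1]]
failed_pct = 100 * len(failures) / len(deployments)

# Mean time to failure recovery, averaged over failed deployments.
mttr = sum(d[2] for d in failures) / len(failures)

print(frequency, failed_pct, mttr)
```

With this sample log: 5 deployments over 6 days, a 40% failure rate, and a mean recovery time of 60 minutes.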

43. Explain the term "Infrastructure as Code" (IaC) as it relates to configuration management.

IaC involves:

- Writing code to manage configuration, deployment, and automatic provisioning.
- Managing data centers with machine-readable definition files rather than physical hardware configuration.
- Ensuring all your servers and other infrastructure components are provisioned consistently and effortlessly.
- Administering cloud computing environments, also known as infrastructure as a service (IaaS).
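The idea of a machine-readable definition file plus idempotent provisioning can be sketched as follows. The host names, package versions, and `apply_state` helper are illustrative; real tools express the definition in YAML, JSON, or HCL.

```python
# A machine-readable definition (here a Python dict) declares the
# desired state of each server.
desired_state = {
    "web-01": {"nginx": "1.22", "firewall": "enabled"},
    "web-02": {"nginx": "1.22", "firewall": "enabled"},
}

def apply_state(current, desired):
    """Idempotently converge a server's current state to the desired
    one, returning only the changes that were actually needed."""
    changes = {}
    for key, value in desired.items():
        if current.get(key) != value:
            changes[key] = value
            current[key] = value  # a real tool would install/configure
    return changes

# web-01 is out of date; web-02 already matches the definition.
fleet = {
    "web-01": {"nginx": "1.20"},
    "web-02": {"nginx": "1.22", "firewall": "enabled"},
}
for host, spec in desired_state.items():
    print(host, apply_state(fleet[host], spec))
# Re-running produces no further changes: applying the definition
# twice is safe, which is the property IaC tools rely on.
```

Consistency across servers falls out automatically: every host is converged to the same declared state, not configured by hand.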

44. Why has DevOps gained prominence over the last few years?

Before talking about the growing popularity of DevOps, discuss the current industry scenario. Begin with some examples of how big players such as Netflix and Facebook are investing in DevOps to automate and accelerate application deployment and how this has helped them grow their business. Using Facebook as an example, you would point to Facebook's continuous deployment and code ownership models and how these have helped it scale up while ensuring the quality of experience at the same time. Hundreds of lines of code are implemented without affecting quality, stability, and security. Your next use case should be Netflix. This streaming and on-demand video company follows similar practices with fully automated processes and systems. Mention the user base of these two organizations: Facebook has 2 billion users, while Netflix streams online content to more than 100 million users worldwide. These are great examples of how DevOps can help organizations ensure higher success rates for releases, reduce the lead time between bug fixes, streamline continuous delivery through automation, and achieve an overall reduction in manpower costs.

45. Describe configuration management.

Configuration management systems are software systems that allow managing an environment in a consistent, reliable, and secure way.

By using an optimized domain-specific language (DSL) to define the state and configuration of system components, multiple people can work on and store the system configuration of thousands of servers in a single place. CFEngine was among the first generation of modern enterprise solutions for configuration management. Its goal was to have a reproducible environment by automating things such as installing software and creating and configuring users, groups, and responsibilities. Second-generation systems brought configuration management to the masses. While able to run in standalone mode, Puppet and Chef are generally configured in master/agent mode, where the master distributes configuration to the agents. Ansible is new compared to the aforementioned solutions and popular because of its simplicity. The configuration is stored in YAML, and there is no central server. The state configuration is transferred to the servers through SSH (or WinRM, on Windows) and then executed. The downside of this procedure is that it can become slow when managing thousands of machines.

46. What is the difference between orchestration and classic automation? What are some common orchestration solutions?

Classic automation covers the automation of software installation and system configuration, such as user creation, permissions, and security baselining, while orchestration is more focused on the connection and interaction of existing and provided services. (Configuration management covers both classic automation and orchestration.) Most cloud providers have components for application servers, caching servers, block storage, message queueing, databases, etc. They can usually be configured for automated backups and logging. Because all these components are provided by the cloud provider, it becomes a matter of orchestrating these components to create an infrastructure solution. The amount of classic automation necessary in cloud environments depends on the number of components available to be used: the more existing components there are, the less classic automation is necessary. In local or on-premises environments, you first have to automate the creation of these components before you can orchestrate them. For AWS, a common solution is CloudFormation, with lots of different types of wrappers around it. Azure uses deployments, and Google Cloud has the Google Deployment Manager.

A common orchestration solution that is cloud-provider-agnostic is Terraform. While it is closely tied to each cloud, it provides a common state definition language that defines resources (like virtual machines, networks, and subnets) and data (which references existing state on the cloud). Nowadays most configuration management tools also provide components to manage the orchestration solutions or APIs provided by the cloud providers.
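The resource-graph idea behind such orchestration tools can be sketched as a dependency-ordered provisioning plan. The resource names and the tiny resolver below are illustrative, not Terraform's actual algorithm.

```python
# Orchestration wires existing components together in dependency order:
# each resource lists the resources it depends on.
resources = {
    "network": [],
    "subnet": ["network"],
    "database": ["subnet"],
    "app-server": ["subnet", "database"],
}

def provisioning_order(resources):
    """Topologically sort resources so each is created after its
    dependencies (a depth-first traversal of the dependency graph)."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in resources[name]:
            visit(dep)          # create dependencies first
        order.append(name)

    for name in resources:
        visit(name)
    return order

print(provisioning_order(resources))
# -> ['network', 'subnet', 'database', 'app-server']
```

A classic-automation tool would instead script each component's installation; the orchestrator's job is only to decide what to create, in what order, from what already exists.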

47. Explain continuous monitoring.

As the application is developed and deployed, we do need to monitor its performance. Monitoring is also very important, as it might help to uncover defects which might not have been detected earlier.

48. Can DevOps be applied to a waterfall process? Explain the significance of the Agile process in DevOps implementation.

In the waterfall process, as all of us are aware, initially the complete requirements are gathered, next the system is designed, implementation of the system is then done, followed by system testing and deployment to the end users. In this process, the problem was that there was a huge waiting time for build and deployment, which made it very difficult to get feedback. The solution to this problem was for the Agile process to bring agility to both development and operations. The Agile process could be the principle, or a certain pre-requisite, required for DevOps implementation. DevOps goes hand in hand with the Agile process. The focus area is to release the software in a very timely manner with shorter release cycles and quick feedback. So the Agile process focuses mainly on speed, and in DevOps, it works well with the automation of various tools.

49. What is your expertise on DevOps projects?

Explain your role as a DevOps engineer: how you were working as a part of a 24x7 environment, maybe in shifts; the projects involved in automating the CI and CD pipeline; and the support provided to the project teams. Hence, taking complete responsibility for maintaining and extending the environments for DevOps automation to more and more projects and different technologies (example: .NET, J2EE projects) involved within the organization.

Also, explain the process (example: Agile) and tools that were involved in end-to-end automation. You could also talk about your experience, if any, in DevOps support over a cloud environment.

50. What are the top 10 DevOps tools that are used in the industry today?

The list includes:

- Jira
- Git/SVN
- Bitbucket
- Jenkins
- Bamboo
- SonarQube
- Artifactory/Nexus
- Docker
- Chef/Puppet/Ansible
- IBM UrbanCode Deploy/CA RA
- Nagios/Splunk

51. Can you explain the uses of the tools mentioned in the above question and how they connect to give a DevOps model (CI/CD)?

i) (1)

Planning Jira – Used Jira –  Used for Project Planning and Issue management

ii) (1) (2)

Continuous Integration Git – Version Git –  Version Control Jenkins – Open Jenkins –  Open Source Continuous Integration tool which can c an also help in

Continuous Delivery. (3) SonarQube – Code SonarQube –  Code Analysis (4) JFrog Artifactory Artifactory –  – Binary  Binary Repository Manager iii) Continuous Delivery (1) Chef / Puppet / Ansible Ansible –  – Configuration  Configuration Management and Application Deployment (2) IBM Urbancode Deploy / CA RA RA –  – Continuous  Continuous Delivery iv) (1)

Continuous Monitoring Nagios / Splunk

 

Sample DevOps Workflow:

- Typically in an Agile process, user stories, tasks, defects, etc. are all stored in Jira and assigned to the product owners and developers.
- Developers pick up the tasks assigned to them and work on the development. The source code is version controlled and stored in Git; the developers commit their changes to Git, and the code is shared among the developers using GitHub.
- Jenkins, the continuous integration tool, pulls the code on every check-in or on a schedule, and the build is done using build tools like Maven or Ant.
- As the J2EE WAR files are produced, they are also version controlled and stored in a binary repository manager like Artifactory or Nexus.
- Unit testing with JUnit and code analysis with SonarQube are also automated.
- Once the above steps are completed, continuous delivery to the different environments is performed, based on approvals, using tools like IBM UrbanCode Deploy / CA RA.
- Continuous testing (functional and acceptance testing) is invoked in the appropriate test environments using tools like Selenium.
- Continuous monitoring is an ongoing activity in the PROD environment.

52.

Which scripting tools are used in DevOps?

Python and Ruby are the scripting languages most commonly used in DevOps. 53. Explain the typical roles involved in DevOps.

 

- DevOps Architect: the leader responsible for the entire DevOps process.
- DevOps Engineer: someone experienced with Agile, SCM or version control, CI/CD and setting up automation tools, infrastructure automation, and database management. Any developer with coding or scripting skills and the acumen to move into deployment or system administration can qualify for the role of a DevOps engineer.





54.

Explain some of the metrics used to measure DevOps success.

Some examples are as follows:
- The first and most important metric is the speed of delivery, i.e. the time taken for a work item to get into the production environment.
- Next is the deployment time: how long a deployment takes once the process is automated.
- It is essential to track how many defects are found in the different environments, particularly the PRODUCTION environment. This is very important when deciding which features can be released faster. The use of Agile methodologies helps a lot here, and the prime goal is to reduce PRODUCTION-level defects.
- Deployments do not normally fail, but it is very important to keep track of failed deployments and to have a mechanism to roll back to the previous stable version.
- In any DevOps implementation, unit testing and functional testing are key. Whenever code changes are made, we need to look at whether these tests break and to what extent; it is imperative that the automated tests are robust enough to sustain any code changes.
- It is very important to measure the actual or average time it takes to recover from a failure in the PRODUCTION environment. This is termed Mean Time To Recover (MTTR), and it should be short; this also means proper monitoring tools are needed to keep recovery time short.
- The performance of the application is another key metric that should be monitored, especially after deployments.
- A very important success factor is the number of bugs reported by customers, which primarily depends on the quality of the application.

55.

What are your expectations from a career perspective of DevOps?

To be involved in the end-to-end delivery process, and most importantly to help improve the process so as to enable the development and operations teams to work together and understand each other's point of view.

 

  56.

What is a Virtual Private Cloud or VNet?

Cloud providers allow fine-grained control over the network plane for isolating components and resources. In general, there are a lot of similarities among the cloud providers' usage concepts, but as you go into the details there are some fundamental differences in how the various cloud providers handle this segregation. In Azure this is called a Virtual Network (VNet), while AWS and Google Compute Engine (GCE) call it a Virtual Private Cloud (VPC). These technologies segregate networks with subnets and use non-globally routable IP addresses. Routing differs among them: while customers have to specify routing tables themselves in AWS, all resources in an Azure VNet allow the flow of traffic using the system routes. Security policies also contain notable differences between the various cloud providers. 57.

How do you build a hybrid cloud?

There are multiple ways to build a hybrid cloud. A common way is to create a VPN tunnel between the on-premise network and the cloud VPC/VNet. AWS Direct Connect or Azure ExpressRoute can also bypass the public internet and establish a secure connection between a private data center and the VPC/VNet; this is the method of choice for large production deployments.

58.

How do you design a self-healing distributed service?

Any system that is supposed to be capable of healing itself needs to be able to handle faults and partitioning (i.e. when part of the system cannot access the rest of the system) to a certain extent. For databases, a common way to deal with partition tolerance is to use a quorum for writes: every time something is written, a minimum number of nodes must confirm the write. The minimum number of nodes necessary to gracefully recover from a single-node fault is three; that way, the two healthy nodes can confirm the state of the system.

 

For cloud applications, it is common to distribute these three nodes across three availability zones. 59.

Describe a centralized logging solution.

Logging solutions are used for monitoring system health. Both events and metrics are generally logged, and may then be processed by alerting systems. Metrics could be storage space, memory, load, or any other kind of continuous data that is constantly being monitored; monitoring them allows detecting events that diverge from a baseline. In contrast, event-based logging might cover events such as application exceptions, which are sent to a central location for further processing, analysis, or bug-fixing.

A commonly used open-source logging solution is the Elasticsearch-Logstash-Kibana (ELK) stack. Stacks like this generally consist of three components:
1. A storage component, e.g. Elasticsearch.
2. A log or metric ingestion daemon such as Logstash or Fluentd, responsible for ingesting large amounts of data and adding or processing metadata while doing so. For example, it might add geolocation information for IP addresses.
3. A visualization solution such as Kibana to show important visual representations of the system state at any given time.

Most cloud platforms either have their own centralized logging solutions that contain one or more of the aforementioned products or tie them into their existing infrastructure. AWS CloudWatch, for example, contains all the parts described above and is heavily integrated into every component of AWS, while also allowing parallel export of data to AWS S3 for cheap long-term storage. Another popular commercial solution for centralized logging and analysis, both on premise and in the cloud, is Splunk. Splunk is considered to be very scalable, is also commonly used as a Security Information and Event Management (SIEM) system, and has advanced table and data model support.
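As a rough sketch of how the three components fit together, a minimal Logstash pipeline could look like the following (the Beats input on port 5044, the geoip filter, and the local Elasticsearch address are illustrative assumptions, not a prescribed setup):

```
input {
  beats { port => 5044 }                          # ingest events from log shippers
}
filter {
  geoip { source => "clientip" }                  # enrich events with geolocation metadata
}
output {
  elasticsearch { hosts => ["localhost:9200"] }   # store for Kibana to query
}
```

Logstash would then parse and enrich incoming events and hand them to Elasticsearch, where Kibana visualizes them.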

Docker and Kubernetes 1.

Explain the architecture of Docker.

- Docker uses a client-server architecture.
- The Docker client is a service that runs a command. The command is translated using the REST API and is sent to the Docker daemon (server).
- The Docker daemon accepts the request and interacts with the operating system to build Docker images and run Docker containers.
- A Docker image is a template of instructions, which is used to create containers.
- A Docker container is an executable package of an application and its dependencies together.
- A Docker registry is a service to host and distribute Docker images among users.

2.

What are the advantages of Docker over virtual machines?

Criteria | Virtual Machine | Docker
Memory space | Occupies a lot of memory space | Docker containers occupy less space
Boot-up time | Long boot-up time | Short boot-up time
Performance | Running multiple virtual machines leads to unstable performance | Containers have better performance, as they are hosted in a single Docker engine
Scaling | Difficult to scale up | Easy to scale up
Efficiency | Low efficiency | High efficiency
Portability | Compatibility issues when porting across different platforms | Easily portable across different platforms
Space allocation | Data volumes cannot be shared | Data volumes are shared and reused across multiple containers

  3.



 

How do we share Docker containers with different nodes?

It is possible to share Docker containers on different nodes with Docker Swarm.
- Docker Swarm is a tool that allows IT administrators and developers to create and manage a cluster of swarm nodes within the Docker platform.
- A swarm consists of two types of nodes: manager nodes and worker nodes.

4.

What are the commands used to create a Docker swarm?

Create a swarm on the machine where you want to run your manager node:

docker swarm init --advertise-addr <MANAGER-IP>

Once you've created a swarm on your manager node, you can add worker nodes to it. When a node is initialized as a manager, it immediately creates a join token. To add a worker node, the following command (with that token) should be executed on the host machine of the worker node:

docker swarm join --token SWMTKN-1-49nj1cmql0jkz5s954yi3oex3nedyz0fb0xx14ie39trti4wxv8-vxv8rssmk743ojnwacrr2e7c 192.168.99.100:2377

 

5.

How do you run multiple containers using a single service?

 

 

- It is possible to run multiple containers as a single service with Docker Compose.
- Here, each container runs in isolation but can interact with the others.
- All Docker Compose files are YAML files.



 




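As an illustrative sketch (the image names, port, and password are arbitrary examples, not a prescribed setup), a docker-compose.yml describing two containers run as one service stack might look like:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine        # first container: a web server
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mysql:8             # second container: a database
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running docker-compose up then starts both containers on a shared network, where web can reach db by its service name.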

6.

What is a Dockerfile used for?

A Dockerfile is used for creating Docker images using the build command.
- With a Docker image, any user can run the code to create Docker containers.
- Once a Docker image is built, it is uploaded to a Docker registry.
- From the Docker registry, users can pull the Docker image and build new containers whenever they want.

 






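For illustration, a minimal Dockerfile for a hypothetical Python application (app.py and requirements.txt are placeholder file names):

```dockerfile
FROM python:3-slim                      # base image pulled from a registry
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt     # bake dependencies into the image
COPY . .
CMD ["python", "app.py"]                # default command when a container starts
```

docker build -t myapp . turns this into an image, and docker run myapp starts a container from it.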

7. Explain the differences between Docker images and Docker containers.

Docker Images | Docker Containers
Docker images are templates of Docker containers | Containers are runtime instances of a Docker image
An image is built using a Dockerfile | Containers are created using Docker images
Images are stored in a Docker repository or on Docker Hub | Containers are stored in the Docker daemon
The image layer is a read-only filesystem | Every container layer is a read-write filesystem

8. Instead of YAML, what can you use as an alternative file for Docker Compose?

To build with Docker Compose, a user can use a JSON file instead of YAML. In that case, the filename should be specified as follows: docker-compose -f docker-compose.json up  9.

How do you create a Docker container?

Task: Create a MySQL Docker container

A user can either build a Docker image or pull an existing one (like MySQL) from Docker Hub. Docker then creates a new container from the existing image; simultaneously, the read-write container layer is created on top of the image layer.
- Command to create a Docker container: docker run -t -i mysql
- Command to list the running containers: docker ps



10. What is the difference between a registry and a repository?

Registry | Repository
A Docker registry is an open-source server-side service used for hosting and distributing Docker images | A repository is a collection of multiple versions of a Docker image
In a registry, a user can distinguish between Docker images by their tag names | It is stored in a Docker registry
Docker also has its own default registry, called Docker Hub | It has two types: public and private repositories

11. What are the cloud platforms that support Docker?

 

The following are cloud platforms that Docker runs on:
- Amazon Web Services
- Microsoft Azure
- Google Cloud Platform
- Rackspace







12.

What is the purpose of the expose and publish commands in Docker?

Expose:
- EXPOSE is an instruction used in a Dockerfile.
- It is used to expose ports within a Docker network.
- It is a documenting instruction, used at the time of building an image and running a container.
- Example: EXPOSE 8080

Publish:
- Publish is used in the docker run command.
- It can be used outside a Docker environment.
- It is used to map a host port to a running container's port.
- --publish or -p is the flag used in Docker.
- Example: docker run -d -p 0.0.0.0:80:80 <image>

Continuous Monitoring 1. How does Nagios help in the continuous monitoring of systems, applications, and services?

Nagios enables server monitoring and checks whether servers are sufficiently utilized or whether any task failures need to be addressed. It:
- Verifies the status of servers and services
- Inspects the health of your infrastructure
- Checks whether applications are working correctly and web servers are reachable


3. What do you mean by the Nagios Remote Plugin Executor (NRPE)?

The Nagios Remote Plugin Executor (NRPE) enables you to execute Nagios plugins on remote Linux/Unix machines, so that you can monitor remote machine metrics (disk usage, CPU load, etc.). It has two parts:
- The check_nrpe plugin, which resides on the local monitoring machine
- The NRPE daemon, which runs on the remote Linux/Unix machine

4. What are the port numbers that Nagios uses for monitoring purposes?

Usually, Nagios uses the following port numbers for monitoring:

5.

What are active and passive checks in Nagios?

 

Nagios is capable of monitoring hosts and services in two ways:

Actively:
- Active checks are initiated by the Nagios process
- Active checks are run on a regularly scheduled basis

Passively:
- Passive checks are initiated and performed by external applications/processes
- Passive check results are submitted to Nagios for processing

6. What are active and passive checks in Nagios?

Active Checks:
- The check logic in the Nagios daemon initiates active checks.
- Nagios will execute a plugin and pass it information on what needs to be checked.
- The plugin then checks the operational state of the host or service and reports the results back to the Nagios daemon.
- Nagios processes the results of the host or service check and sends notifications.

Passive Checks:
- In passive checks, an external application checks the status of a host or service.
- It writes the results of the check to the external command file.
- Nagios reads the external command file and places the results of all passive checks into a queue for later processing.
- Nagios may send out notifications, log alerts, etc., depending on the check result information.

 



 

 

7.

Explain the main configuration file and its location in Nagios.

The main configuration file consists of several directives that affect how Nagios operates. It is read by both the Nagios process and the CGIs. A sample main configuration file is placed into your settings directory; its typical location is /usr/local/nagios/etc/nagios.cfg.

8.

What is the Nagios Network Analyzer?

- It provides an in-depth look at all network traffic sources and security threats.
- It provides a central view of your network traffic and bandwidth data.
- It allows system admins to gather high-level information on the health of the network.
- It enables you to be proactive in resolving outages, abnormal behavior, and threats before they affect critical business processes.

9. What are the benefits of HTTP and SSL certificate monitoring with Nagios?

HTTP monitoring:
- Increased server, services, and application availability
- Fast detection of network outages and protocol failures
- Enables web transaction and web server performance monitoring

SSL certificate monitoring:
- Increased website availability
- Frequent application availability
- Provides increased security

10.

Explain virtualization with Nagios.

Nagios can run on different virtualization platforms, like VMware, Microsoft Virtual PC, Xen, Amazon EC2, etc.
- Provides the capability to monitor an assortment of metrics on different platforms
- Ensures quick detection of service and application failures
- Can monitor the following metrics: CPU usage, memory, networking, VM status
- Reduces administrative overhead









 



11. Name the three variables that affect recursion and inheritance in Nagios

name - The template name, which can be referenced in other object definitions so they can inherit the object's properties/variables.
use - Here, you specify the name of the template object that you want to inherit properties/variables from.
register - This variable indicates whether or not the object definition should be registered with Nagios.

define someobjecttype {
    <object-specific variables> ...
    name     template_name
    use      name_of_template
    register [0/1]
}

 

12.

Why is Nagios said to be object-oriented?

Using the object configuration format, you can create object definitions that inherit properties from other object definitions; hence, Nagios is known as object-oriented.

Types of objects:
- Services
- Hosts
- Commands
- Time periods

13.

Explain what state stalking is in Nagios.

State stalking is used for logging purposes in Nagios.
- When stalking is enabled for a particular host or service, Nagios will watch that host or service very carefully.
- It will log any changes it sees in the output of check results.
- This helps in the analysis of log files.



 







Source Code Management — Git

1. Explain the difference between a centralized and a distributed version control system (VCS).

Centralized Version Control System:
- All file versions are stored on a central server
- No developer has a copy of all files on their local system
- If the central server crashes, all data from the project will be lost

Distributed Version Control System:
- Every developer has a copy of all versions of the code on their system
- Team members can work offline, and the system does not rely on a single location for backups
- Even if the server crashes, there is no threat to the data







2. What is the git command that downloads any repository from GitHub to your computer?

The git command that downloads any repository from GitHub to your computer is git clone. 
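A quick way to see git clone in action without touching GitHub is to clone a local repository; the command form is identical for a GitHub URL (the temporary paths below stand in for https://github.com/user/repo.git):

```shell
# Create a stand-in "remote" repository with one commit
src=$(mktemp -d)
git -C "$src" init -q
git -C "$src" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit"

# Download (clone) it, exactly as you would a GitHub repository
dst=$(mktemp -d)/repo-copy
git clone -q "$src" "$dst"

git -C "$dst" log --oneline   # the clone carries the full history
```
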

3.

How do you push a file from your local system to the GitHub repository using Git?

 

First, connect the local repository to your remote repository:
git remote add origin [copied web address]
(Example: git remote add origin https://github.com/Simplilearn-github/test.git)

Second, push your file to the remote repository:
git push origin master

4.

How is a bare repository different from the standard way of initializing a Git repository?

Using the standard method:
git init
- You create a working directory with git init
- A .git subfolder is created with all the git-related revision history

Using the bare way:
git init --bare
- It does not contain any working or checked-out copy of the source files
- Bare repositories store the git revision history in the root folder of your repository instead of in a .git subfolder






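The difference is easy to verify locally (the temporary directories are only for illustration):

```shell
# Standard init: history lives under a .git subfolder
work=$(mktemp -d)
git -C "$work" init -q

# Bare init: no working copy; history lives in the repository root
bare=$(mktemp -d)
git -C "$bare" init -q --bare

ls "$work/.git/HEAD"   # present inside the .git subfolder
ls "$bare/HEAD"        # present directly in the root, no .git folder
```
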

5.

Which of the following CLI commands can be used to rename files?

1. git rm
2. git mv
3. git rm -r
4. None of the above

The correct answer is 2) git mv.

6. What is the process for reverting a commit that has already been pushed and made public?

There are two ways to revert a commit:
1. Remove or fix the bad file in a new commit and push it to the remote repository:
git commit -m "commit message"
2. Create a new commit that undoes all the changes made in the bad commit:
git revert <commit hash>
Example: git revert 56de0938f

7. Explain the difference between git fetch and git pull.

Git fetch | Git pull
Only downloads new data from a remote repository | Updates the current HEAD branch with the latest changes from the remote server
Does not integrate any new data into your working files | Downloads the new data and integrates it with the current working files
Can be run at any time to update the remote-tracking branches | Tries to merge the remote changes with your local ones
Command: git fetch origin or git fetch --all | Command: git pull origin master

8.

What is Git stash?

A developer working on a current branch wants to switch to another branch to work on something else, but doesn't want to commit their unfinished work. The solution to this issue is git stash: it takes your modified tracked files and saves them on a stack of unfinished changes that you can reapply at any time.
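A short demonstration in a throwaway repository (paths and messages are arbitrary):

```shell
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "base"

echo "half-finished work" > "$repo/feature.txt"
git -C "$repo" add feature.txt

git -C "$repo" stash push -q         # shelve the change; working tree is clean
git -C "$repo" status --porcelain    # prints nothing: safe to switch branches

git -C "$repo" stash pop -q          # reapply the shelved change later
cat "$repo/feature.txt"
```
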

9.

Explain the concept of branching in Git.

Suppose you are working on an application and you want to add a new feature to it. You can create a new branch and build the new feature on that branch.
- By default, you always work on the master branch
- The circles on the branch represent various commits made on the branch
- After you are done with all the changes, you can merge the branch into the master branch





 
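The branch-then-merge flow described above can be sketched in a throwaway repository (branch and file names are arbitrary examples):

```shell
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" symbolic-ref HEAD refs/heads/master   # pin the branch name
git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "initial commit on master"

git -C "$repo" checkout -q -b new-feature            # create and switch to a branch
echo "feature code" > "$repo/feature.txt"
git -C "$repo" add feature.txt
git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "add new feature"

git -C "$repo" checkout -q master                    # back to master
git -C "$repo" merge -q new-feature                  # merge the feature branch in
git -C "$repo" log --oneline
```
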

10.

  What is the difference between Git Merge and Git Rebase?

Suppose you are working on a new feature in a dedicated branch, and another team member updates the master branch with new commits. You can use these two functions:

Git merge:
- To incorporate the new commits into your feature branch, use git merge
- Creates an extra merge commit every time you need to incorporate changes
- But it pollutes your feature branch history

Git rebase:
- As an alternative to merging, you can rebase the feature branch onto master
- Incorporates all the new commits from the master branch
- Creates new commits for every commit in the original branch and rewrites project history

 

  11. How do you find a list of files that have been changed in a particular commit?

The command to get a list of files that have been changed in a particular commit is:
git diff-tree -r {commit hash}
Example: git diff-tree -r 87e673f21b
- The -r flag instructs the command to list individual files
- The commit hash lists all the files that were changed or added in that commit


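A worked example in a throwaway repository (the extra --no-commit-id and --name-only flags are added here purely to print clean filenames):

```shell
repo=$(mktemp -d)
git -C "$repo" init -q
echo "readme" > "$repo/README.md"
git -C "$repo" add .
git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "initial"

echo "print('hello')" > "$repo/app.py"
git -C "$repo" add .
git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "add app"

hash=$(git -C "$repo" rev-parse HEAD)
# Only app.py changed in this commit, so only app.py is listed
git -C "$repo" diff-tree --no-commit-id --name-only -r "$hash"
```
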

12.

What is a merge conflict in Git, and how can it be resolved?

A git merge conflict happens when you merge branches with competing commits, and Git needs your help to decide which changes to incorporate in the final merge.

You can manually edit the conflicted file to select the changes that you want to keep in the final merge, or resolve the conflict using the GitHub conflict editor. The editor is used when a merge conflict is caused by competing line changes, for example, when people make different changes to the same line of the same file on different branches in your Git repository.

 



 



 

Resolving a merge conflict using the GitHub conflict editor:
1. Under your repository name, click "Pull requests."
2. In the "Pull requests" list, click the pull request with the merge conflict that you'd like to resolve.
3. Near the bottom of your pull request, click "Resolve conflicts."
4. Decide if you want to keep only your branch's changes, only the other branch's changes, or make a brand new change, which may incorporate changes from both branches.
5. Delete the conflict markers and make the changes you want in the final merge.
6. If you have more than one merge conflict in your file, scroll down to the next set of conflict markers and repeat steps 4 and 5 to resolve it.
7. Once you have resolved all the conflicts in the file, click "Mark as resolved."
8. If you have more than one file with a conflict, select the next file you want to edit on the left side of the page under "conflicting files" and repeat steps 4 to 7 until you've resolved all of your pull request's merge conflicts.
9. Once you've resolved your merge conflicts, click "Commit merge." This merges the entire base branch into your head branch.
10. To merge your pull request, click "Merge pull request."

Resolving a merge conflict using the command line:
1. Open Git Bash and navigate into the local Git repository that contains the merge conflict.
2. Generate a list of the files that the merge conflict affects. In this example, the file styleguide.md has a merge conflict.
3. Open any text editor, such as Sublime Text or Atom, and navigate to the file that has merge conflicts.
4. To see the beginning of the merge conflict in your file, search the file for the conflict marker "<<<<<<<".
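The command-line flow above can be reproduced end to end in a throwaway repository (branch names and file contents are arbitrary; overwriting the file stands in for the "manually edit" step):

```shell
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" symbolic-ref HEAD refs/heads/master
echo "original line" > "$repo/styleguide.md"
git -C "$repo" add .
git -C "$repo" -c user.name=dev -c user.email=dev@example.com commit -q -m "base"

git -C "$repo" checkout -q -b topic                  # competing change on a branch
echo "change from topic" > "$repo/styleguide.md"
git -C "$repo" -c user.name=dev -c user.email=dev@example.com commit -q -am "topic edit"

git -C "$repo" checkout -q master                    # competing change on master
echo "change from master" > "$repo/styleguide.md"
git -C "$repo" -c user.name=dev -c user.email=dev@example.com commit -q -am "master edit"

git -C "$repo" merge topic || true                   # reports CONFLICT in styleguide.md
grep -c "<<<<<<<" "$repo/styleguide.md"              # the conflict marker is present

# "Manually edit" step: replace the conflicted content with the resolution
echo "reconciled line" > "$repo/styleguide.md"
git -C "$repo" add styleguide.md
git -C "$repo" -c user.name=dev -c user.email=dev@example.com \
    commit -q -m "resolve merge conflict"
```
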