Installing and Working With CentOS 7 x64 and KVM

June 18, 2016 | Author: danxl007

Section A: Installing KVM on CentOS 7 x64 Minimal

Checking for Virtualization Support

Just to be clear, we'll need to check for virtualization support: that it is supported by the hardware, enabled in the BIOS, and visible to the kernel. To check whether the CPU advertises virtualization support, run this command:

egrep '(vmx|svm)' --color=always /proc/cpuinfo

If the device supports virtualization, you will see either ‘vmx‘ or ‘svm‘ highlighted. VMX is the Intel flag, and SVM is the AMD flag.
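As an optional cross-check (just a suggestion, assuming the lscpu utility from util-linux is present, which it is on a default CentOS 7 minimal install), you can ask lscpu directly:

# Prints a "Virtualization: VT-x" (Intel) or "Virtualization: AMD-V" (AMD) line
# when the CPU exposes hardware virtualization.
lscpu | grep -i virtualization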

Install Dependencies

Next, we'll want to get some dependencies going. Since you've already updated your OS to the latest patched version, we can install the software. I'll spare you the full transcript of the output, as the dependencies brought my total to 147 installed items, but this is the command you'll run to get KVM and the associated tools installed:

yum -y install kvm virt-manager libvirt virt-install qemu-kvm xauth dejavu-lgc-sans-fonts virt-viewer

What are you installing? Here are some explanations:

KVM: A full virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V).
Virt-Manager: A desktop user interface for managing virtual machines through libvirt.
Libvirt: A toolkit for interacting with the virtualization capabilities of recent versions of Linux.
Virt-Install: A command line tool for creating new KVM guests using the libvirt hypervisor management library.
Qemu-kvm: A Linux kernel module that allows a user space program to utilize the hardware virtualization features of various processors.
Dejavu-lgc-sans-fonts: A font family based on the Vera fonts.
Virt-viewer: A minimal tool for displaying the graphical console of a virtual machine.
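Once yum finishes, a quick sanity check (just a suggested verification, not part of the original steps) confirms the key packages landed:

# Each query should print a name-version-release string rather than
# "package ... is not installed".
rpm -q qemu-kvm libvirt virt-install virt-manager virt-viewer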

Create Networking

The KVM host acts as a router, moving traffic in and out of its interfaces, and uses NAT to translate the packets across them. We'll have to set up our interfaces to act as usable devices for KVM. The first step is to allow the kernel to do forwarding:

echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl -p /etc/sysctl.d/99-ipforward.conf
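To confirm the setting took effect (a quick optional check):

# Should print "net.ipv4.ip_forward = 1"
sysctl net.ipv4.ip_forward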

Next, we'll want to turn the external interface into a bridge. This allows traffic to be routed across the interface. Start by looking at /etc/sysconfig/network-scripts/ and seeing what's listed:

ls /etc/sysconfig/network-scripts/
ifcfg-em1  ifdown-bnep  ifdown-ipv6  ifdown-ppp     ifdown-Team      ifup          ifup-eth   ifup-isdn   ifup-post    ifup-sit       ifup-tunnel      network-functions
ifcfg-lo   ifdown-eth   ifdown-isdn  ifdown-routes  ifdown-TeamPort  ifup-aliases  ifup-ippp  ifup-plip   ifup-ppp     ifup-Team      ifup-wireless    network-functions-ipv6
ifdown     ifdown-ippp  ifdown-post  ifdown-sit     ifdown-tunnel    ifup-bnep     ifup-ipv6  ifup-plusb  ifup-routes  ifup-TeamPort  init.ipv6global

This lets me know that I've got an interface config at /etc/sysconfig/network-scripts/ifcfg-em1; sounds good. I've only got one physical interface on the device I am working with. We'll edit this file and make some changes (if you don't know how to use vi, read up on it first):

vi /etc/sysconfig/network-scripts/ifcfg-em1

My initial file looks like this:

HWADDR="xx:xx:xx:xx:xx:xx"
TYPE="Ethernet"
BOOTPROTO="dhcp"
DEFROUTE="yes"
PEERDNS="yes"
PEERROUTES="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
NAME="em1"
UUID="297f77a2-b6ec-4b79-b5db-59590f902d81"
ONBOOT="yes"

We're going to remove or comment out the IP information. We're also going to add the BRIDGE variable, pointing to a file we're going to make next. Change your interface to look somewhat like this:

#BOOTPROTO="dhcp"
#DEFROUTE="yes"
#PEERDNS="yes"
#PEERROUTES="yes"
#IPV4_FAILURE_FATAL="no"
#IPV6INIT="yes"
#IPV6_AUTOCONF="yes"
#IPV6_DEFROUTE="yes"
#IPV6_PEERDNS="yes"
#IPV6_PEERROUTES="yes"
#IPV6_FAILURE_FATAL="no"
DEVICE=em1
BOOTPROTO=static
ONBOOT=yes
BRIDGE=br0
TYPE=Ethernet

Now, let's set up the "br0" device before the computer finds out what we've done! We'll create /etc/sysconfig/network-scripts/ifcfg-br0 and keep it nice looking and simple, like this:

DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=xx.xx.xx.xx
NETMASK=xx.xx.xx.xx
GATEWAY=xx.xx.xx.xx
DNS1=xx.xx.xx.xx

Go ahead and save that file so that the system can read it.
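If you'd rather not wait for the reboot that comes later, restarting the legacy network service should bring the bridge up now (an optional shortcut; be warned it can briefly drop an SSH session, and the reboot below covers it anyway):

# Re-reads the ifcfg-* files and brings up br0 with em1 attached to it.
systemctl restart network
# Confirm the bridge exists and carries the address you assigned.
ip a show br0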

Services Up

Next up, let's start and enable the libvirtd service:

systemctl start libvirtd
systemctl enable libvirtd

Next, let's reboot the machine:

reboot

That will reboot the system. If you are logged in via an SSH session, you'll get booted.

KVM Up

Now that we are back up, let's make sure that KVM is happy and added itself properly to our modules:

lsmod | grep kvm

You should get output like this:

kvm_intel    138567    0
kvm          441119    1 kvm_intel

Next, we can double-check that our bridge is up by running:

ip a show br0 | grep UP

This will let you know if the br0 interface is up. I don't know about you, but I am SSH'd into this box, so I KNOW it's up. Lastly, we can query qemu and see if we can hit KVM:

sudo virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------

This looks good on my end! Let's get on with it!
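One small aside before we do: the plain list only shows running guests, so once you have VMs defined you may want the --all flag (a standard virsh flag) to include shut-off ones as well:

# Lists every defined guest, running or not.
sudo virsh -c qemu:///system list --all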

Section B: Configuring and Using KVM

Our First Virtual Machine

Templates

Before we make a VM, let's query KVM to see what kind of templates we have. You can query it like this:

virt-install --os-variant=list
win7             : Microsoft Windows 7
vista            : Microsoft Windows Vista
winxp64          : Microsoft Windows XP (x86_64)
winxp            : Microsoft Windows XP
win2k            : Microsoft Windows 2000
win2k8           : Microsoft Windows Server 2008
win2k3           : Microsoft Windows Server 2003
openbsd4         : OpenBSD 4.x
freebsd8         : FreeBSD 8.x
freebsd7         : FreeBSD 7.x
freebsd6         : FreeBSD 6.x
solaris9         : Sun Solaris 9
solaris10        : Sun Solaris 10
opensolaris      : Sun OpenSolaris
netware6         : Novell Netware 6
netware5         : Novell Netware 5
netware4         : Novell Netware 4
msdos            : MS-DOS
generic          : Generic
debianwheezy     : Debian Wheezy
debiansqueeze    : Debian Squeeze
debianlenny      : Debian Lenny
debianetch       : Debian Etch
fedora19         : Fedora 19
fedora18         : Fedora 18
fedora17         : Fedora 17
fedora16         : Fedora 16
fedora15         : Fedora 15
fedora14         : Fedora 14
fedora13         : Fedora 13
fedora12         : Fedora 12
fedora11         : Fedora 11
fedora10         : Fedora 10
fedora9          : Fedora 9
fedora8          : Fedora 8
fedora7          : Fedora 7
fedora6          : Fedora Core 6
fedora5          : Fedora Core 5
mageia1          : Mageia 1 and later
mes5.1           : Mandriva Enterprise Server 5.1 and later
mes5             : Mandriva Enterprise Server 5.0
mandriva2010     : Mandriva Linux 2010 and later
mandriva2009     : Mandriva Linux 2009 and earlier
rhel7            : Red Hat Enterprise Linux 7
rhel6            : Red Hat Enterprise Linux 6
rhel5.4          : Red Hat Enterprise Linux 5.4 or later
rhel5            : Red Hat Enterprise Linux 5
rhel4            : Red Hat Enterprise Linux 4
rhel3            : Red Hat Enterprise Linux 3
rhel2.1          : Red Hat Enterprise Linux 2.1
sles11           : Suse Linux Enterprise Server 11
sles10           : Suse Linux Enterprise Server
opensuse12       : openSuse 12
opensuse11       : openSuse 11
ubuntusaucy      : Ubuntu 13.10 (Saucy Salamander)
ubunturaring     : Ubuntu 13.04 (Raring Ringtail)
ubuntuquantal    : Ubuntu 12.10 (Quantal Quetzal)
ubuntuprecise    : Ubuntu 12.04 LTS (Precise Pangolin)
ubuntuoneiric    : Ubuntu 11.10 (Oneiric Ocelot)
ubuntunatty      : Ubuntu 11.04 (Natty Narwhal)
ubuntumaverick   : Ubuntu 10.10 (Maverick Meerkat)
ubuntulucid      : Ubuntu 10.04 LTS (Lucid Lynx)
ubuntukarmic     : Ubuntu 9.10 (Karmic Koala)
ubuntujaunty     : Ubuntu 9.04 (Jaunty Jackalope)
ubuntuintrepid   : Ubuntu 8.10 (Intrepid Ibex)
ubuntuhardy      : Ubuntu 8.04 LTS (Hardy Heron)
virtio26         : Generic 2.6.25 or later kernel with virtio
generic26        : Generic 2.6.x kernel
generic24        : Generic 2.4.x kernel

Well, that’s a good start!
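A side note: on newer virt-install releases the --os-variant=list form was retired in favor of the libosinfo tooling, so if the command above complains, the following should give you the same information (assuming the package that ships osinfo-query is installed):

# Prints the known OS short IDs and names from the libosinfo database.
osinfo-query os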

SELinux

One thing we'll need to work with is SELinux. We don't want to disable SELinux, because that is what the 'faint of heart' do; we embrace it. First, install policycoreutils-python:

yum -y install policycoreutils-python

After that gets installed, we can run the semanage utility. If you intend on putting the virtual machines anywhere other than /var/lib/libvirt, you'll want to run semanage against the directory where the VM images will be stored. In my case, I have a directory under /opt/, so I'll run it on /opt/VirtualMachines. First, create the directory:

mkdir -p /opt/VirtualMachines

Then, set the SELinux context:

semanage fcontext -a -t virt_image_t "/opt/VirtualMachines(/.*)?"
restorecon -R /opt/VirtualMachines

That will open up my /opt/VirtualMachines directory as far as SELinux is concerned.
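To verify the label stuck (an optional check):

# The directory should now show virt_image_t in its SELinux context.
ls -Zd /opt/VirtualMachines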

Firewall-CMD (optional, not needed if tunneling the traffic)

Firewalld is the new iptables. You'll want to open up the port for VNC connections to the consoles of the virtual machines. You can do that with these commands:

firewall-cmd --zone=public --add-port=5900/tcp --permanent
firewall-cmd --reload

That's going to open TCP port 5900 so you can VNC to the console.
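Since each additional guest gets the next VNC port (5901, 5902, and so on, as covered later), you might prefer to open a small range up front. The range syntax is standard firewall-cmd, though the exact range below is just an example:

# Opens TCP 5900-5910, enough for the first eleven guests' consoles.
firewall-cmd --zone=public --add-port=5900-5910/tcp --permanent
firewall-cmd --reload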

Create the Virtual Machine

We'll use the 'virt-install' command to create the virtual machine. Here are some of the options to use with virt-install:

- --connect # Key to connect to a server; we'll use the value {qemu:///system} for this command.
- -n # The name of the virtual machine guest.
- -r # The amount, in megabytes, of RAM you want to give the system.
- --vcpus=x # The number of CPUs to assign to the virtual machine; replace x with the number of CPUs.
- --disk # The location of the virtual machine disk file. Pass {path=/path/to/file.img,size=x} as the argument for this key, where x is an integer giving the size in gigabytes.
- --graphics # How to display the console of the virtual machine. Pass {vnc,listen=0.0.0.0} so the VNC console accepts connections on any host address.
- --noautoconsole # Do not automatically connect to the console of the virtual machine.
- --os-type # General flavor of the operating system. Pass {windows} to use a Microsoft variant.
- --os-variant # The specific operating system; pass {win2k8} as the argument.
- --accelerate # Use hardware-assisted acceleration.
- --network # Pass {bridge=br0} to specify the bridge we created earlier.
- --hvm # Use full virtualization on the virtual machine.
- --cdrom # Pass {/path/to/file.iso} to attach a virtual CD-ROM to the machine.

You can always pass "-h" to see the full list of options, and you can gather your favorite options from the virt-install website. I took a gander at the website and came up with this for a Server 2008 R2 machine (note that I pre-staged the ISO file in /opt/ISO/):

virt-install --connect qemu:///system --graphics vnc,listen=0.0.0.0 --name=NPGENERALS01 --ram=4096 --vcpus=2 --cdrom=/opt/ISO/Server2008R2.iso --os-variant=win2k8 --disk /opt/VirtualMachines/NPGNERALS01.img,size=60 --network=bridge:br0 --autostart

This gave me some nice 'getting it done' output:

Starting install...
Creating storage file NPGNERALS01.img
Creating domain...
Connected to domain NPGENERALS01
Escape character is ^]
Domain installation still in progress. Waiting for installation to complete.

At this point, you are going to need to connect to the server via VNC on port 5900. I'm using OS X Yosemite; I could not use the built-in VNC client, nor could I use RealVNC, but fortunately Chicken VNC worked just fine. Here's a screenshot connecting into this Server 2008 R2 machine:

[Screenshot: VNC session connected to the Server 2008 R2 guest]

From there, you can run your install routine.
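For comparison, a Linux guest works the same way. The sketch below is only an illustration under assumed names: the CentOS ISO path, guest name, and sizes are placeholders, not values from the original write-up:

# Hypothetical example: a small CentOS 7 guest installed from a local ISO.
virt-install --connect qemu:///system \
  --name=TESTCENTOS01 --ram=2048 --vcpus=1 \
  --cdrom=/opt/ISO/CentOS-7-x86_64-Minimal.iso \
  --os-variant=rhel7 \
  --disk /opt/VirtualMachines/TESTCENTOS01.img,size=20 \
  --graphics vnc,listen=0.0.0.0 \
  --network=bridge:br0 --autostart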

Considerations and Management Commands

Notes about VNC/Firewall-CMD/SSH

From this point, you can get fancy with the virt-install man page and install a Linux host or what have you. There is something to be said about the firewall, VNC, and new machines: each machine you create increments the VNC port up from 5900. The first VM will be on 5900, the second on 5901, and so on. Your firewall will either have to be opened on those ports as I demonstrated earlier, or you will need to tunnel the traffic via SSH. You can always find the VNC port of a guest machine with this command:

virsh vncdisplay {servername}
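If you'd rather tunnel than open firewall ports, a plain SSH local forward does the trick. The host name and display number here are just placeholders:

# Suppose virsh vncdisplay returned ":1", i.e. port 5900 + 1 = 5901 on the KVM host.
# Forward local port 5901 to the guest's VNC console, then point your VNC
# client at localhost:5901 (display :1).
ssh -L 5901:127.0.0.1:5901 user@kvm-host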

Management Commands

The virsh command will get you through all the things that you need to do. Namely, you can (see the example sequence after this list):

1. Get a list of the guests with {virsh --connect qemu:///system list}
2. Get more info on a guest with {virsh dominfo {servername}}
3. Shut down a guest with {virsh --connect qemu:///system shutdown {servername}}
4. Forcefully power off a guest with {virsh --connect qemu:///system destroy {servername}}
5. Power on a guest with {virsh --connect qemu:///system start {servername}}
6. Delete a guest with:
   1. {virsh --connect qemu:///system destroy {servername}}
   2. {virsh --connect qemu:///system undefine {servername}}
   3. {rm -Rf /path/to/servername.img}
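As referenced above, here is what the full removal sequence might look like for a hypothetical guest named TESTCENTOS01 whose disk lives under /opt/VirtualMachines (both names are placeholders):

# Hard power-off the guest, remove its definition, then delete its disk image.
virsh --connect qemu:///system destroy TESTCENTOS01
virsh --connect qemu:///system undefine TESTCENTOS01
rm -f /opt/VirtualMachines/TESTCENTOS01.img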

Conclusion

This has been a simple rundown on installing KVM on CentOS 7 x64. I hope I have taken into consideration everything I needed to. Feel free to drop me an email if something is awry. Happy admining.
