Archive for the ‘Virtualisation’ Category

Proxmox VE 1.3 released

June 10, 2009

Proxmox VE 1.3 was released on the 4th of June.

The following things have been changed in Proxmox VE 1.3:

  • Updated Kernel
  • Support for Highpoint RR3120
  • Removed OpenVZ limit of 50 machines
  • Update to kvm-86
  • Vzdump: minor bug fixes
  • Qemu-server: added new ‘tablet’ option
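
The new ‘tablet’ option adds an emulated USB tablet device to a KVM guest, which generally keeps the mouse pointer in the VNC console aligned with the pointer inside the guest. A minimal sketch of toggling it, assuming it is exposed through qm set like the other VM options (the VM ID 101 is only an example):

    # enable the USB tablet device for VM 101
    qm set 101 -tablet 1

    # or disable it again
    qm set 101 -tablet 0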

Download the ISO image, burn it to a CD-ROM and boot your server from it. For details see Installation.

  • Download via http: Proxmox VE 1.3
  • Download via bittorrent: Proxmox VE 1.3 (MD5SUM is 6ddefdba42121ea24b4e1a3b17d93355)
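
If you want to check the downloaded ISO against the MD5SUM above before burning it, a quick verification from a Linux shell looks like this (the filename is only an example; use the name of the file you actually downloaded):

    # print the MD5 checksum of the downloaded image
    md5sum proxmox-ve_1.3.iso
    # the printed hash should match 6ddefdba42121ea24b4e1a3b17d93355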

Important note

If you run Windows KVM guests, the operating system will detect some hardware changes on the first boot and Windows will show a new network card in the device manager (e.g. you will need to reassign a fixed IP configuration to the new network card; with a DHCP setup there should be no issues). Under some circumstances a reactivation of your Windows license can be necessary.

A proxmox cluster

May 29, 2009

For Dutch-reading people I have translated the Proxmox "how to make a cluster" section, which you can find on my site.

You can now see how easy it is to set up a Proxmox cluster and then migrate your virtual machines from node to node 😉

Proxmox, an update!

May 23, 2009

Due to my interest in virtualisation technologies I like to try things out, and as mentioned before I think Proxmox will be my virtualisation technology of choice at home.

Why? Because it is free and it works!


Today I converted my free Citrix XenServer to Proxmox, firstly because of an error on the XenServer with a VM, and secondly because I needed a second KVM-compatible machine.

I also migrated a Windows XP virtual machine from the master node to the newly added node, and it went smoothly. I am still very impressed by it and I like it very much.

So you also want a cluster with proxmox? Here is how to do it:

On the machine which should become the master node you type:

  • pveca -c

On the machine which should become a node within the cluster you type:

  • pveca -a -h <ip address of the master node>   (e.g. 192.168.1.100)


And your cluster is done… yes, it is that simple; the nodes will of course synchronize with each other.
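
Once both commands have completed you can check from either machine that the nodes actually see each other. A quick way to do that, assuming the status listing of the 1.x pveca tool, is:

    # list the cluster nodes, their role and their state
    pveca -l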

After this you can do a live migration or a regular migration to the other node. Good luck, and keep me posted on how it goes.
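
Migrations can be started from the web interface, and for KVM guests there is also a command-line route via qm. A rough sketch, where the VM ID 101 and the node name proxmox2 are only examples and the exact option spelling may differ per version:

    # move VM 101 to the node named proxmox2 while it keeps running
    qm migrate 101 proxmox2 -online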


eislon

Proxmox live migration tested! And it….

May 20, 2009

An update:

I have installed Proxmox on two different types of machines; the configuration is:

  1. AMD Athlon(tm) 64 X2 Dual Core Processor 4200+ with 4GB of RAM
  2. (Intel Atom) Genuine Intel(R) CPU 230 @ 1.60GHz with 2GB of RAM
  3. Network: 100 Mbit

With this configuration I migrated a Debian OpenVZ virtual machine from the AMD to the Intel Atom machine, and I can report that it really works nicely. Migrating the virtual machine back from the Intel Atom to the AMD X2 with online migration also works very nicely.

I am very impressed and this offers possibilities. I would go so far as to say that it will become my virtual environment of choice, but only after a test of migrating a KVM virtual machine back and forth. This of course can’t be done with my current configuration, so I will have to install Proxmox on another machine with virtualisation technology in its CPU and with it enabled in the BIOS! I will post my results here.
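
To find out whether a machine is a candidate, you can check whether its CPU advertises the hardware virtualisation extensions and whether the kvm module actually gets to use them. A small sketch (the flags only tell you what the CPU supports; whether it is enabled in the BIOS shows up when the kvm module is loaded):

    # a count above zero means the CPU supports Intel VT (vmx) or AMD-V (svm)
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # after loading the kvm modules, a message such as "kvm: disabled by bios"
    # means the extension still has to be switched on in the BIOS
    dmesg | grep -i kvm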

eislon.

Xen 3.4 is released

May 19, 2009

The Xen developers have released version 3.4 of their virtualisation technology.

Version 3.4.0 of the Xen paravirtualisation software contains the first version of the Xen Client Initiative (XCI) code. XCI is intended to be a basic client hypervisor, which can be enhanced and extended by the open-source community. The Xen project hopes that this will bring the system to other hardware platforms.

New features of Xen 3.4 are:

  • improved stability;
  • system errors should be reported and isolated, which should help Xen to keep running;
  • Xen is made greener (power saving).

Xen 3.4.0 can be downloaded as source code here.

Source: Pro-Linux (in German)

Proxmox Virtual Environment

May 18, 2009

Proxmox Virtual Environment is an easy to use Open Source virtualization platform for running Virtual Appliances and Virtual Machines.

It is able to use the OpenVZ and KVM virtualisation methods. For KVM you will need a processor which supports CPU virtualisation, so it should have AMD-V or Intel VT technology on board. Many CPUs have this, but it is often disabled, so it should be enabled in your BIOS, if that is possible!

I have had a quick look at Proxmox, because the idea of an open-source virtualisation environment seemed too good to be true. Yes, there is a company behind it, and yes, of course they like to sell their applications, but the Proxmox Virtual Environment is licensed under the GPL 2 license.

A quick look deserves a quick review:

  • Installation is a piece of cake (watch out: your hard drive will be erased, so make a backup).
  • The amount of interaction required is small.
  • Management is done through an AJAX-based website (you need Java to be installed for the VNC console).
  • The first impression is very good.

Here is the vision of Proxmox:

“Setup a complete virtual server infrastructure within 1 hour.” Starting from bare metal, it is possible to create a full featured enterprise infrastructure including an email proxy, web proxy, groupware, wiki, web cms, crm, trouble ticket system, intranet … – including backup/restore and live migration.

Nowadays people are faced with more and more complex server software and installation methods. But Proxmox VE is different.

Proxmox VE is simple to use:

  • Pre-built Virtual Appliances
  • Install and manage with a few clicks
  • Selection of products for use in the enterprise

Proxmox VE is licensed under GPLv2 (Open source). Open source and commercial Virtual Appliances are supported.

Source: proxmox

Hyper-V is doing CPU compatibility

May 16, 2009

Microsoft is introducing: Processor Compatibility

With Hyper-V R2, they included a new Processor Compatibility feature. Processor compatibility allows you to move a virtual machine up and down multiple processor generations from the same vendor.

Here’s how it works (according to Microsoft): When a Virtual Machine (VM) is started on a host, the hypervisor exposes the set of supported processor features available on the underlying hardware to the VM. This set of processor features is called the guest visible processor features and remains available to the VM until the VM is restarted.

When a VM is started with processor compatibility mode enabled, Hyper-V normalizes the processor feature set and only exposes guest visible processor features that are available on all Hyper-V enabled processors of the same processor architecture, i.e. AMD or Intel. This allows the VM to be migrated to any hardware platform of the same processor architecture. Processor features are “hidden” by the hypervisor by intercepting a VM’s CPUID instruction and clearing the returned bits corresponding to the hidden features.

Just so we’re clear: this still means AMD<->AMD and Intel<->Intel. It does not mean you can Live Migrate between different processor vendors AMD<->Intel or vice versa.

In addition, you may be aware that both AMD and Intel have provided similar capabilities in hardware, Extended Migration and Flex Migration respectively. Extended and Flex Migration are cool technologies available on relatively recent processors, but this is a case where providing the solution in software allows us to be more flexible and provide this capability to older systems too. Processor Compatibility also makes it easier to upgrade to the newest server hardware. In addition, Hyper-V Processor Compatibility can be done on a per VM basis (it’s a checkbox) and doesn’t require any BIOS changes.
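
There is no switch for this on the guest side, but you can observe the effect of the CPUID masking from inside a Linux guest, because the feature flags the hypervisor exposes are exactly what the kernel reports in /proc/cpuinfo. A small sketch, purely as an illustration of the masking described above:

    # list the CPU feature flags visible to this guest, one per line;
    # with processor compatibility mode enabled, newer features that are not
    # common to all Hyper-V capable CPUs of the same vendor should be missing
    grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort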

For more information see Microsoft’s Virtualization Team Blog

Source: Advance Homelinux (Dutch) and Microsoft Virtualization Team Blog