
LXC. Linux Containers (LXC) provide a Free Software virtualization system for computers running GNU/Linux.


This is accomplished through kernel-level isolation, which allows one to run multiple virtual units simultaneously. Those units, similar to chroots, are sufficiently isolated to guarantee the required security, yet utilize available resources efficiently, as they run on the same kernel. Full support for LXC (including the userspace tools) has been available since the Debian 6.0 "Squeeze" release.
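The kernel-level isolation that LXC builds on can be glimpsed with the stock `unshare` tool from util-linux. The sketch below changes the hostname inside a new UTS namespace (one of several namespaces LXC combines) without affecting the host; the hostname `lxc-demo` is an arbitrary example, and this assumes unprivileged user namespaces are enabled on the kernel:

```shell
#!/bin/sh
# Record the host's hostname before entering the namespace.
host_name=$(hostname)

# -r maps us to root in a new user namespace, -u creates a UTS
# namespace; the hostname change is invisible outside it.
ns_name=$(unshare -r -u sh -c 'hostname lxc-demo; hostname')

echo "host: $host_name"
echo "namespace: $ns_name"
```

A full container adds PID, mount, network, and IPC namespaces plus cgroup resource limits on top of this same mechanism.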

Some issues remain in Debian 7 "Wheezy". Proxmox and OpenVZ - Linux Server Wiki. To begin, partition your disk.

Proxmox and OpenVZ - Linux Server Wiki

Create a 15 GB / partition, a swap partition, and finally an LVM2 partition with the remaining space. OpenVZ Proxmox Virtualization. After one year of operation with Xen, I chose to move Fridu from Xen paravirtualization to the OpenVZ container model.

OpenVZ Proxmox Virtualization

Here follow some explanations of why I made this change and a description of my new architecture.

Disclaimer: anything I write here was done outside of my professional work context, and none of my current or past employers or customers participated in, or were even consulted about, this work. Fridu is 100% part of my free time; everything, including hosting, is funded out of our own pocket money and used to support friends' non-commercial organisations. That said, I think I have the technical background to design a smart architecture (cf. my profile).

Demonstration/Video: this demonstration is a live screencast recorded with xvidcap on Linux. It shows how to create a new virtual machine through the Proxmox OpenVZ web graphical interface, and then how to expose the newly created zone to the outside world with three different mechanisms: VPN, port forwarding, and reverse proxy. Xen is rock solid, but ...

Resource shortage. Sometimes you see strange failures from some programs inside your container.

Resource shortage

In some cases it means one of the resources controlled by OpenVZ has hit its limit. The first thing to do is to check the contents of the /proc/user_beancounters file in your container. The last column of the output is the fail counter: each time a resource hits its limit, the fail counter is incremented. So, if you see non-zero values in the failcnt column, something is wrong. There are two ways to fix the situation: reconfigure (in some cases recompile) the application, or change the resource management settings.

UBC parameters

You can see whether you have hit the limit for a UBC parameter by analyzing the last column (named failcnt) of /proc/user_beancounters, then get the current values for that parameter's barrier and limit.

Disk quota

If one of the disk usage commands shows a usage of 100%, you have exceeded one of the disk quota limits.
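Scanning the failcnt column by eye is error-prone, so the check described above can be automated with awk. The beancounters sample below is illustrative (container ID and values are made up); on a real container you would point the awk command at /proc/user_beancounters directly:

```shell
#!/bin/sh
# Illustrative sample of /proc/user_beancounters for a container "123".
cat > beancounters.sample <<'EOF'
Version: 2.5
       uid  resource           held    maxheld    barrier      limit    failcnt
      123:  kmemsize        1835274    2240118   11055923   11377049          0
            numproc              30         49        240        240          0
            privvmpages       87719      98281     131072     139264         14
            numfile             580        730       9312       9312          0
EOF

# Print every resource whose fail counter (last column) is non-zero.
# Skip the version and header lines; the first data row carries the
# container uid in column 1, so take the name from column 2 there.
awk 'NR > 2 { name = ($1 ~ /:$/) ? $2 : $1;
              if ($NF + 0 > 0) print name, "failcnt=" $NF }' beancounters.sample
# → privvmpages failcnt=14
```

If a parameter such as privvmpages shows failures, either raise its barrier and limit (e.g. with `vzctl set <ctid> --privvmpages <barrier>:<limit> --save` on the hardware node) or reconfigure the offending application; for the disk quota case, `df` and `df -i` inside the container show whether block or inode usage has reached 100%.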

You can migrate existing servers to Proxmox VE.

Migration of servers to Proxmox VE

Moving Linux servers is usually quite easy, so you will not find many troubleshooting hints here.

Windows-specific P2V issues: inaccessible boot device. Booting a virtual clone (IDE) of a physical Windows system partition may fail with a BSOD referring to the problem STOP: 0x0000007B (0xF741B84C,0xC0000034,0x00000000,0x00000000) INACCESSIBLE_BOOT_DEVICE. This means that the source physical Windows machine had no driver support for the IDE controller, or at least not for the one virtually presented by KVM (see the Microsoft KB article for details). As Microsoft suggests, create a mergeide.reg file on the physical machine and merge it into the registry before the P2V migration.
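The disk side of a P2V move usually boils down to taking a block-level image of the source disk and verifying it before booting the clone in KVM. The sketch below uses a throwaway file in place of a real device such as /dev/sda; the file names and sizes are illustrative:

```shell
#!/bin/sh
# Stand-in for the physical source disk (8 MB of random data).
dd if=/dev/urandom of=physical.disk bs=1M count=8 2>/dev/null

# Image the "disk" the same way you would image /dev/sda.
dd if=physical.disk of=clone.raw bs=1M 2>/dev/null

# Verify the clone matches the source before handing it to the VM.
src=$(sha256sum physical.disk | cut -d' ' -f1)
dst=$(sha256sum clone.raw | cut -d' ' -f1)
[ "$src" = "$dst" ] && echo "image verified"
```

For a Windows source, the mergeide.reg step described above must happen on the running physical machine before this image is taken, since the registry cannot easily be edited afterwards.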

Windows 2000: see [1].

Disk booting tips: check that your disk has the "boot flag" enabled (you can check this with gparted on NTFS disks by booting the VM from a live-CD ISO; see the gparted manual page). This is maybe not so Windows-specific, but it is worth a reminder here. Self-hosting: configuring a Proxmox 2 cluster without multicast. Or how to interconnect, over the Internet and in unicast, several Proxmox nodes installed on separate networks.

Self-hosting: configuring a Proxmox 2 cluster without multicast

Update, 30 March 2012: the stable version of Proxmox VE 2 is available and this post has been updated accordingly. Charlatan! The Proxmox 2 wiki is categorical: no multicast, no cluster. Well, yes and no. Not only does it work, but, even better, the solution is rather elegant: just run the Proxmox nodes on a virtual network (one that supports multicast) and interconnect everything with OpenVPN (in unicast). At this point, if you are normally constituted, your eyes are probably shining and your tongue hanging out, not to mention the trickle of drool running down your beard, still stained by yesterday's pizza. But hold on, comrades; let us first give credit where credit is due, because in fact (unless I am mistaken) it was "ned Productions Ltd" that drew first, with a rather nice tutorial (in English) on the subject. Ready?
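The virtual-network layer described above can be sketched as a pair of OpenVPN configuration fragments. A tap (layer-2) tunnel is what lets multicast frames from the cluster stack cross the unicast links; the hostname, port, subnet, and certificate paths here are illustrative, and the cluster configuration running on top of the tunnel is out of scope:

```
# server.conf - one node acts as the OpenVPN hub (illustrative values)
dev tap0            # layer-2 tunnel so multicast passes through
proto udp
port 1194
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server-bridge 10.8.0.1 255.255.255.0 10.8.0.50 10.8.0.100
client-to-client    # let remote nodes reach each other directly
keepalive 10 60

# client.conf - on each remote Proxmox node (illustrative values)
client
dev tap0
proto udp
remote hub.example.org 1194
ca ca.crt
cert node.crt
key node.key
```

With all nodes bridged onto the same tap segment, the cluster communication behaves as if the nodes shared a multicast-capable LAN, even though only unicast UDP crosses the Internet.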