Qemu-guest-agent. Windows 2012 guest best practices. Introduction: This is a set of best practices to follow when installing a Windows Server 2012 guest on a Proxmox VE 3.x server (3.4 at the time of writing).
Right now it is a work in progress, but hopefully it will soon become a comprehensive and reliable document. Please feel free to add to it, even if only to propose a potential best practice. Install: Prepare. First, download the Windows VirtIO drivers ISO image. After clicking "Create VM", enter a Name for your VM, select your Resource Pool (if you have one) and click Next. Select Microsoft Windows 8/2012 in the OS tab and click Next.
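The wizard steps above can also be done from the command line with Proxmox's qm tool. A minimal sketch, assuming hypothetical values: VMID 100, an ISO named win2012.iso on the local storage, and a 32 GB VirtIO disk:

```
# Create the VM shell (names, sizes and the ISO path are placeholders):
qm create 100 --name win2012 --ostype win8 --memory 4096 \
    --net0 virtio,bridge=vmbr0 \
    --virtio0 local:32 \
    --cdrom local:iso/win2012.iso

# Start it once created:
qm start 100
```

The ostype value win8 covers both Windows 8 and Server 2012, matching the OS-tab choice in the GUI.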
Launch Windows install: Start your newly created virtual machine using the "Start" link in the upper right. Install additional VirtIO drivers: If you did not select all VirtIO drivers during installation, you can also install them later. For more information on configuring ballooning, see Dynamic Memory Management. Proxmox VE 4.x Cluster. Introduction: A Proxmox VE 4.x (and all later versions) cluster enables central management of multiple physical servers.
A Proxmox VE cluster consists of several nodes (up to 32 physical nodes, possibly more, depending on network latency). Main features. Requirements. NOTE: It is not possible to mix Proxmox VE 3.x and earlier with a Proxmox VE 4.0 cluster. Informatique/Softwares/Proxmox/PVE 2.x - Problems and diagnostics — Ordinoscope.net. Unable to write '/etc/pve/priv/authorized_keys.tmp.706086' - File too large.
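Cluster creation itself is done with the pvecm tool. A minimal sketch, assuming a hypothetical cluster name and node IP:

```
# On the first node — create the cluster (the name is a placeholder):
pvecm create mycluster

# On each additional node — join, giving the IP of an existing cluster node:
pvecm add 192.168.1.10

# Verify quorum and membership at any time:
pvecm status
```

Nodes should be joined while they are empty; adding a node that already hosts guests is not supported.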
Bug #1318551 “Kernel Panic - not syncing: An NMI occurred, pleas...” : Bugs : linux package. Ubuntu Server 14.04 amd64: Linux global04-jobs2 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux.
Migration of servers to Proxmox VE. Introduction: You can migrate existing servers to Proxmox VE.
Moving Linux servers is usually quite easy, so you will not find many troubleshooting hints here. Windows-specific P2V issues: inaccessible boot device. Booting a virtual clone (IDE) of a physical Windows system partition may fail with a BSOD referring to STOP: 0x0000007B (0xF741B84C,0xC0000034,0x00000000,0x00000000) INACCESSIBLE_BOOT_DEVICE. This means that the source physical Windows machine had no driver support for the IDE controller, or at least not for the one virtually provided by KVM (see the Microsoft KB article for details). As Microsoft suggests, create a mergeide.reg file (File:Mergeide.zip) on the physical machine and merge it into the registry before the P2V migration.
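To illustrate what mergeide.reg does (use the complete file from File:Mergeide.zip, not this excerpt), it maps the generic IDE controller hardware IDs to the standard drivers and forces those drivers to start at boot. A partial, hedged sketch:

```
Windows Registry Editor Version 5.00

; Illustration only — the real file covers many more device IDs.
; Map the Intel PIIX IDE controller to the intelide service so the
; virtual clone can find its boot disk.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CriticalDeviceDatabase\pci#ven_8086&dev_7111]
"ClassGUID"="{4D36E96A-E325-11CE-BFC1-08002BE10318}"
"Service"="intelide"

; Ensure the standard ATAPI driver starts at boot (Start = 0).
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\atapi]
"Start"=dword:00000000
```

Merging these entries before the migration means Windows already knows how to drive the emulated IDE controller on first boot inside KVM.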
Persistently bridge traffic between two or more Ethernet interfaces (Debian). Objective: To persistently bridge traffic between two or more Ethernet interfaces on a Debian-based system. Background and Scenario: See Bridge traffic between two or more Ethernet interfaces on Linux.
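On Debian, a persistent bridge is declared in /etc/network/interfaces. A minimal sketch, assuming the bridge-utils package is installed and that br0, eth0 and eth1 are placeholder names:

```
# /etc/network/interfaces — persistent bridge over two NICs
auto br0
iface br0 inet dhcp
    bridge_ports eth0 eth1
    bridge_stp off
```

The bridged interfaces themselves get no address; the bridge device br0 carries the IP configuration and comes up automatically at boot.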
Method Overview. MicroHOWTO: Persistently bridge traffic between two or more Ethernet interfaces (Debian). Les aventures d'un Geek Unixien: Setting up a bridge under Debian/Ubuntu. In my perpetual quest to set up things that only half work, I have had the chance to grapple with Ethernet bridges and virtual Ethernet interfaces on Linux.
But what is an Ethernet bridge, you may ask, and why would you want to set one up? An Ethernet bridge is the equivalent of a virtual switch at the system level. Index of /cdimage/unofficial/non-free/cd-including-firmware. Unofficial non-free images including firmware packages: Here are some extra images, equivalent to the normal images we produce regularly, except that they also include non-free firmware to make things easier on some systems requiring proprietary but redistributable firmware.
See the linked page for more details. There are two types of image here: "netinst" install CDs that also include firmware, to make installation easier, and live images with firmware packages pre-installed. The current and current-live directories contain images that match the latest stable Debian release. Images pour firmwares non reconnus — wiki.debian-fr. Missing firmware in Debian? Learn how to deal with the problem. As you already know, since Debian 6.0 non-free firmware is no longer provided by a standard Debian installation.
This causes some trouble for users who need it. I am therefore going to give a small overview of the topic and explain what you need to know to deal with the problem. What is firmware and how is it used? From the user's point of view, firmware is just some data needed by a piece of hardware in order to function properly. The driver for that hardware typically loads the firmware onto the device as part of its initialization.
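On an installed system, the usual fix is to enable the non-free archive component and install the firmware packages from it. A sketch, assuming a hypothetical mirror and release name:

```
# /etc/apt/sources.list — add the contrib and non-free components
# (replace <release> with your actual Debian release name):
deb http://deb.debian.org/debian <release> main contrib non-free

# Then fetch the firmware packages:
apt-get update
apt-get install firmware-linux-nonfree
```

Device-specific packages (for example firmware-realtek or firmware-iwlwifi) exist as well; check the kernel log for the exact firmware file name the driver complains about.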
In the Linux kernel, drivers all use a standardized interface (request_firmware) to retrieve the firmware before sending it to the device. Debian (like most distributions) has selected the latter option. Storage Model. Proxmox VE uses a very flexible storage model. Virtual machine images can be stored on local storage (more than one local storage type is supported) as well as on shared storage such as NFS or a SAN (e.g. using iSCSI). All storage definitions are synchronized throughout the Proxmox_VE_2.0_Cluster, so it is just a matter of minutes before a SAN configuration is usable on all Proxmox_VE_2.0_Cluster nodes. Comparaison de différents FS Distribués : HDFS – GlusterFS – Ceph. November 25, 2014, by Ludovic Houdayer.
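Those cluster-wide storage definitions live in a single file, /etc/pve/storage.cfg, which the cluster file system replicates to every node. A minimal sketch, with placeholder server and export values:

```
# /etc/pve/storage.cfg — example definitions (addresses and paths are placeholders)
dir: local
        path /var/lib/vz
        content images,iso,vztmpl

nfs: shared-nfs
        server 192.168.1.10
        export /srv/vm-storage
        content images
```

Because the file is shared, adding the nfs entry on one node makes the storage visible to the whole cluster without touching the other nodes.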
GlusterFS or Ceph: Who Will Win the Open Source Cloud Storage Wars? The open source cloud storage wars are here, and show no sign of stopping soon, as GlusterFS and Ceph vie to become the distributed scale-out storage software of choice for OpenStack. The latest volley was fired this month by Red Hat (RHT), which commissioned a benchmarking test that reports more than 300 percent better performance with GlusterFS-based storage. Actually, it might be most precise to describe the GlusterFS-Ceph competition not just as a war, but as a proxy war.
In many ways, the real fight is not between the two storage platforms themselves, but between their respective, much larger backers: Red Hat, which strongly supports GlusterFS development, and Canonical (the company behind Ubuntu Linux), which has placed its bets on Ceph. (Ceph itself is directly sponsored by Inktank, which has a close relationship with Canonical, and in which Ubuntu founder Mark Shuttleworth has invested $1 million of his own money.) But these details don't really matter from the channel perspective. Gluster Vs. Ceph: Open Source Storage Goes Head-To-Head. Storage appliances using open-source Ceph and Gluster offer similar advantages with great cost benefits. Which is faster and easier to use? Open-source Ceph and Red Hat Gluster are mature technologies, but will soon experience a kind of rebirth. GlusterFS 3.2 — geo-replication. Introduction to storage virtualization. As a service evolves, it becomes increasingly difficult to manage storage and its many changes, both in terms of space and of devices.
Network storage virtualization provides a homogeneous view of heterogeneous resources (disk arrays / NAS) spread across a SAN. It brings useful copy and replication features that are independent of the storage devices in use and of their manufacturer. Concept: The best definition of virtualization is probably the abstraction of logical storage from physical storage. Storage virtualization is achieved by isolating data from its physical location. Features: The management of storage space is organized around the notion of pools. What is storage virtualization? - Definition from WhatIs.com. Desktop Virtualization (VDI). Conventional frame-based enterprise arrays, with performance-throttling SAN controllers, are unsuited for VDI environments. These enterprise arrays simply cannot respond to the read/write demand during boot storms, which occur multiple times a day in many VDI environments: in educational settings, as often as once an hour.
Boot times for any given desktop balloon: ten, twenty, even thirty minutes to complete a boot cycle for a few dozen, or a few hundred, virtualized desktops. Performance is so poor that users find their working environment suddenly unworkable. The solution, as far as conventional enterprise array vendors are concerned, is more storage. Racks and racks of it. Experienced VDI practitioners know that traditional enterprise storage arrays don't work for VDI implementations. Storage virtualization: federating volumes into a single resource. 01net. Synology network-attached storage (NAS) server.