A faster Web server: ripping out Apache for Nginx
I am, at best, a fly-by-night sysadmin. I grew to adult nerdhood doing tech support and later admin work in a Windows shop with a smattering of *nix, most of which was attended to by bearded elders locked away in cold, white rooms. It wasn't until I started managing enterprise storage gear that I came to appreciate the power of the bash shell, and my cobbled-together home network gradually changed from a Windows 2003 domain supporting some PCs to a mixture of GNU/Linux servers and OS X desktops and laptops. Like so many others, I eventually decided to put my own website up on the Internets, and I used the Apache HTTP server to host it. But it wasn't quite right for me.

Old and busted
Apache was easy to set up. Things ran well this way for a couple of years, but as I started doing more with the Web server, it began to be apparent that my setup, while perfectly workable, could be better. Additionally, I began running a small wiki on the same box. There were many paths to take.
Optimizing NGINX TLS Time To First Byte (TTTFB)
By Ilya Grigorik on December 16, 2013
Network latency is one of our primary performance bottlenecks on the web. In the worst case, a new navigation requires a DNS lookup, a TCP handshake, two roundtrips to negotiate the TLS tunnel, and finally a minimum of another roundtrip for the actual HTTP request and response: that's five network roundtrips to get the first few bytes of the HTML document! Modern browsers try very hard to anticipate and predict user activity to hide some of this latency, but speculative optimization is not a panacea: sometimes the browser doesn't have enough information, and at other times it might guess wrong.

The why and the how of TTFB
According to the HTTP Archive, the size of the HTML document at the 75th percentile is ~20KB, which means that a new TCP connection will incur multiple roundtrips (due to slow-start) to download this file. With IW4, a 20KB file will take 3 extra roundtrips, and upgrading to IW10 will reduce that to 2 extra roundtrips. Much better!
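The slow-start arithmetic above can be sketched with a toy calculator. This is an idealized model (fixed 1460-byte MSS, window doubling every roundtrip, no delayed ACKs, receiver-window limits, or loss), so its counts can differ by a roundtrip from the article's real-world figures:

```shell
#!/bin/sh
# Idealized slow-start model: how many roundtrips to deliver SIZE bytes
# given an initial congestion window of IW segments (1460-byte MSS).
# Ignores delayed ACKs, receiver windows, and packet loss.
roundtrips() {
  cwnd=$1 size=$2 mss=1460 sent=0 rtts=0
  while [ "$sent" -lt "$size" ]; do
    sent=$((sent + cwnd * mss))   # one flight of cwnd segments per roundtrip
    cwnd=$((cwnd * 2))            # window doubles during slow start
    rtts=$((rtts + 1))
  done
  echo "$rtts"
}
roundtrips 4 20480    # IW4,  20KB
roundtrips 10 20480   # IW10, 20KB
```

Under this simplified model, IW10 delivers a 20KB response in one fewer roundtrip than IW4, which is the whole argument for raising the initial congestion window.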
Optimising NginX, Node.JS and networking for heavy workloads | GoSquared Engineering
Used in conjunction, NginX and Node.JS are the perfect partnership for high-throughput web applications. They're both built using event-driven design principles and are able to scale to levels far beyond the classic C10K limitations afflicting standard web servers such as Apache. Out-of-the-box configuration will get you pretty far, but when you need to start serving upwards of thousands of requests per second on commodity hardware, there's some extra tweaking you must perform to squeeze every ounce of performance out of your servers. This article assumes you're using NginX's HttpProxyModule to proxy your traffic to one or more upstream node.js servers. We'll cover tuning sysctl settings in Ubuntu 10.04 and above, as well as node.js application and NginX tuning. You may be able to achieve similar results on a Debian-based Linux distribution, but YMMV if you're using something else.

Tuning the network
Highlighting a few of the important ones…
net.ipv4.ip_local_port_range
Using netstat:
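A representative sysctl fragment for this kind of tuning looks like the following. The values are illustrative, not the article's own recommendations; load them with `sysctl -p` after editing:

```
# /etc/sysctl.conf (illustrative values, not prescriptions)
net.ipv4.ip_local_port_range = 10240 65535   # more ephemeral ports for proxy-to-upstream connections
net.core.somaxconn = 4096                    # deeper accept queue to absorb connection bursts
net.ipv4.tcp_fin_timeout = 15                # recycle FIN-WAIT sockets sooner
```

The port range matters for a proxy because every connection from NginX to an upstream consumes a local ephemeral port; you can gauge the pressure with `netstat -an | grep -c TIME_WAIT`.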
WordPress on Nginx, Part 2: vhost, MySQL & APC Configurations
What good is a website with a "Welcome to nginx" note? That's where we left off last time. My primary reference for this Apache-to-Nginx migration was this article; in fact, my configs are more or less a copy-paste from that guide. For your convenience I'll repeat the steps here.

Configuring the Nginx vhost
Since it's always nice to keep a backup of the original default config files before making any changes (it's easy to roll back to the reference point and troubleshoot when something goes wrong), we move the original nginx.conf file aside as follows: Then create a new /etc/nginx/nginx.conf file and insert the following text into it: The worker_processes 1 directive above is of special importance here. Also, if you notice, the nginx.conf above doesn't have any WordPress-specific configs yet. In the usual Debian-style layout, sites-available is where we'll keep our vhost configs, while sites-enabled simply holds symlinks to the individual vhost config files that should be active. Remember the "Welcome to nginx!" page? With that done, all good.
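A minimal vhost of the kind this guide builds might look like the sketch below. This is not the article's exact config; the domain, document root, and PHP-FPM socket path are all assumptions:

```nginx
# /etc/nginx/sites-available/example.com (illustrative; enable by symlinking
# into /etc/nginx/sites-enabled/ and reloading nginx)
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;
    index index.php;

    # WordPress pretty permalinks: fall back to index.php
    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Hand .php requests to PHP-FPM over a local socket (path is an assumption)
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
```

The `try_files` fallback is what replaces Apache's mod_rewrite rules for WordPress permalinks.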
Network Tuning and Performance Guide
November 12, 2013
Many of today's desktop systems and servers come with on-board gigabit network controllers. After some simple speed tests you will soon find that you are not able to transfer data over the network much faster than you did with a 100-megabit link. There are many factors which affect network performance, including hardware, operating systems, and network stack options. It is important to remember that you cannot expect to reach gigabit speeds using slow hardware or an unoptimized firewall rule set.

Hardware
No matter what operating system you choose, the machine you run it on determines the theoretical speed limit you can expect to achieve. For a firewall or bridge, we are looking to move data through the system as fast as possible. The quality of the network card is key to high throughput. An on-board gigabit controller that leans on the CPU to do its work will slow the entire system down. That is not to say that all on-board chipsets are bad.
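As a sanity check on expectations, the raw arithmetic: a gigabit link carries at most about 119 MiB/s before Ethernet, IP, and TCP overhead take their cut, so real transfers in the ~110 MiB/s range are already near the theoretical ceiling:

```shell
#!/bin/sh
# Ideal gigabit payload ceiling, ignoring all protocol overhead.
bits_per_sec=1000000000
echo $((bits_per_sec / 8 / 1048576))   # integer MiB/s: prints 119
```

If your transfers stall far below that, the bottleneck is elsewhere: the NIC, the disk, the firewall rule set, or the network stack settings discussed in this guide.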
Nginx Support Enables Massive Web Application Scaling
Your web applications need to scale, especially during demanding traffic events. Nginx is a high-performance web server and reverse proxy that can help you do that. Today, we are extending our Managed Cloud Fanatical Support to include the installation, troubleshooting, patching, and performance tuning of Nginx. Specifics of what is supported can be found in the Knowledge Center article about Cloud Servers with Managed Service Level – Spheres of Support. At its heart, Nginx is a web server, and a very fast one at that! There are three common use cases where Nginx really stands out, and your Managed Cloud Support Team can help implement them for you:
- As a reverse proxy / cache in front of Apache
- As a reverse proxy / cache in front of an application server or framework
- As a replacement for Apache and mod_php
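The first use case, Nginx as a caching reverse proxy in front of Apache, boils down to a config along these lines. It is a sketch only; the backend port, cache path, and cache sizes are assumptions, and the `proxy_cache_path` line belongs in the `http` context of nginx.conf:

```nginx
# Illustrative: nginx answers clients on :80 and proxies to Apache on :8080
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m max_size=1g;

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # Apache moved to an alternate port
        proxy_set_header Host $host;        # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache appcache;               # serve repeat hits from nginx's cache
        proxy_cache_valid 200 10m;
    }
}
```

With this in place, Nginx absorbs static and cacheable traffic cheaply while Apache only sees the requests that genuinely need it.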
Install and Configure PHP-FPM on Nginx - Codestance
PHP-FPM (FastCGI Process Manager) is an alternative FastCGI implementation with some additional features useful for websites of any size, especially high-load websites. It makes it particularly easy to run PHP on Nginx. Included features, from the original website:
- Adaptive process spawning
- Basic statistics
- Advanced process management with graceful stop/start
- Ability to start workers with different uid/gid/chroot/environment and different php.ini
- Stdout & stderr logging
- Emergency restart in case of accidental opcode cache destruction
- Accelerated upload support
- Support for a "slowlog"
- Enhancements to FastCGI, such as fastcgi_finish_request(), a special function to finish the request and flush all data while continuing to do something time-consuming
...and much more.
Notice: PHP-FPM is not designed with virtual hosting in mind (large numbers of pools); however, it can be adapted for any usage model. Let's start with the installation (Ubuntu/Debian). Happy hacking!
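The per-pool knobs behind several of those features live in the pool configuration. A sketch, with illustrative paths and values rather than recommendations:

```ini
; /etc/php/*/fpm/pool.d/www.conf (illustrative values)
[www]
user = www-data                      ; per-pool uid/gid
group = www-data
listen = /var/run/php-fpm.sock       ; socket that nginx's fastcgi_pass points at
pm = dynamic                         ; adaptive process spawning
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
slowlog = /var/log/php-fpm-slow.log  ; the "slowlog" support mentioned above
request_slowlog_timeout = 5s         ; dump a backtrace for requests slower than this
```

Each `[pool]` section gets its own workers, uid/gid, and limits, which is how the per-pool isolation described above is achieved.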
Create and Install an SSL Certificate on NGinx | Admin Serveur
Installing an SSL certificate on NGinx is a matter of a few minutes. For this example, I chose NameCheap as the SSL certificate provider. GeoTrust RapidSSL certificates cost 10.95 USD (~7.95 euros at the time of this post).

Preparing the SSL certificates
Log in to your server:
cd /etc/nginx/
# Create an ssl directory to hold the certificates
mkdir ssl
cd ssl/
Generate the certificates:
# Generate the .key file
openssl genrsa -des3 -out admin-serv.net.key 2048
Generating RSA private key, 2048 bit long modulus
...+++
..................................................................................................+++
e is 65537 (0x10001)
Enter pass phrase for admin-serv.net.key:
Verifying - Enter pass phrase for admin-serv.net.key:
Your (passphrase-protected) .key file is now created; we move on to generating the CSR. Your CSR file is now created.
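The CSR step is elided above; against the key just generated it is the standard `openssl req -new -key admin-serv.net.key -out admin-serv.net.csr`, which prompts for the key's passphrase and the certificate subject fields. A self-contained, non-interactive variant for illustration, using an unencrypted throwaway key and `-subj` to skip the prompts:

```shell
#!/bin/sh
# Illustrative CSR generation; real setups keep the passphrase-protected
# key created above rather than this unencrypted one.
openssl genrsa -out example.key 2048
openssl req -new -key example.key -out example.csr \
  -subj "/C=FR/O=Example/CN=admin-serv.net"
# Inspect the subject embedded in the CSR
openssl req -in example.csr -noout -subject
```

The CN (or, with modern CAs, a subjectAltName) must match the domain the certificate will serve.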
arm/Raspberry Pi - FreeBSD Wiki
FreeBSD/ARM on Raspberry Pi
FreeBSD-CURRENT has supported the Raspberry Pi since November 2012 and the Raspberry Pi 2 since March 2015. If you have questions, ask on the freebsd-arm mailing list.

What is Raspberry Pi?
The Raspberry Pi launched in early 2012 as an inexpensive ($35) PC based on a Broadcom BCM2835 SoC. There are several versions of the Raspberry Pi: the "Model B" includes Ethernet, 2 USB ports, and originally included 256MB RAM.

What works

How to Boot the Raspberry Pi
As of January 2013, FreeBSD-CURRENT fully supports either a video console (you'll need a USB keyboard and display connected) or it can be configured to use a serial console (you'll need a USB-to-TTL serial cable such as the one sold by Adafruit.com). After connecting video and keyboard and inserting the SDHC card, connect power to actually boot.

Anatomy of a Raspberry Pi Boot Image
A FreeBSD bootable image for the Raspberry Pi has both FAT and UFS partitions containing the following files:

How to Build an Image

Binary snapshots