Achieve Best Performance with PHP7 + NGINX

I don’t know much about HHVM, but I can help you with Apache, NGINX, and PHP 7.

I have used the PHP 7 + NGINX combination and achieved around 1 million requests per second in our stress and mixed-load testing.

I have also tested the same setup with Apache 2. Apache faces what is called the C10K problem – strictly speaking, difficulty supporting more than 10,000 connections at a time (and Apache falls well short of this goal). Apache allocates memory to every additional connection, so it tends to start swapping to disk as concurrent connections increase. This sends site performance into a downward spiral and can lead the entire server to crash or freeze.

In contrast, NGINX runs an ongoing event loop that handles requests as they occur, without allocating dedicated resources to each requestor.

NGINX also has a strong caching mechanism, which can be used for caching static as well as dynamic content. CloudFlare, a widely used CDN, is built on NGINX.

PHP 7 is said to be twice as fast as previous versions of PHP, and to use considerably less memory. It compiles code through an abstract syntax tree (AST), which boosts performance, and it ships with a built-in opcode cache (OPcache). You can also use generators (the yield keyword) for cooperative, iterator-style processing.
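As a minimal sketch of this kind of PHP 7 + NGINX setup (the server name, document root, and PHP-FPM socket path are assumptions to adjust for your own pool), NGINX hands PHP requests to PHP-FPM over FastCGI:

server {
   listen 80;
   server_name example.com;
   root /var/www/html;
   index index.php;

   location ~ \.php$ {
       include fastcgi_params;
       fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
       # PHP-FPM (PHP 7) listening on a local Unix socket keeps the request hand-off cheap
       fastcgi_pass unix:/run/php/php7.0-fpm.sock;
   }
}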

To get the best performance, you also have to fine-tune your database, code, and logic. Keeping database connections persistent always helps to increase throughput.

Memcached or Redis also plays an important role in caching your results, which will boost performance.

Finally, the most important factor is infrastructure: you will need high-end CPUs to achieve the best performance.

PHP 7 Performance with NGINX: Web Serving & Caching

8 Tips for Boosting Drupal 8 Performance using Nginx

Drupal is a leading open-source content management system, used for everything from personal blogs to gigantic enterprise and governmental projects and other complex work.

Drupal is based on PHP, a scripting language which is easy to learn and easy to use for rapid prototyping, followed by a quick move to production. However, the basic operation of PHP can contribute to performance problems when a site needs to be able to scale rapidly due to short-term spikes in usage or long-term growth.

Also, most Drupal sites use Apache HTTP Server as their web server, which has its own performance limitations. The tips in this blog post show how to solve common performance problems that face Drupal-based sites. With some imagination and hard work, sites can be quickly re-architected to remove performance bottlenecks and lay the groundwork for growth up to many times current traffic volumes.

Tip 1 – Plan Your Site Architecture

Most Drupal sites initially use Apache HTTP Server as their web server. Apache is used by a wide range of websites, and instructions for configuring it are widely available. However, as traffic and performance demands grow, many sites move to NGINX. NGINX is the leader at busier sites (the top 100,000 sites, top 10,000 sites, and top 1,000 sites).

Apache and Drupal share similar problems when a site gets busier:

» Apache faces what is called the C10K problem – strictly speaking, difficulty supporting more than 10,000 connections at a time. (Apache falls well short of this goal.) Apache allocates memory to every additional connection, so it tends to start swapping to disk as concurrent connections increase. This sends site performance into a downward spiral and can lead the entire server to crash or freeze.
» Drupal does a fair amount of work to serve each request that it receives, with each request consuming memory and CPU time. Similarly to Apache, but at a much lower number of connections, Drupal site performance can fall into a downward spiral.
» Also, when an application server also handles Internet traffic and other functions, it becomes a potential problem in multiple ways. It’s vulnerable to all of the different kinds of problems that a website can have, represents a single point of failure, and needs to be optimized for incompatible tasks, such as fast responsiveness to Internet requests, fast application processing, and fast disk access, as well as high security.

So to address performance bottlenecks as a site grows, you can take several separate but related steps, as described in this blog post:

» Replace Apache with NGINX as the web server for your Drupal site. This improves performance and sharply reduces memory utilization when many thousands of connections run concurrently.
» Implement a reverse proxy server. NGINX is a very popular reverse proxy server for Drupal sites and for sites of all kinds. Implementing a reverse proxy server removes the burden of handling Internet traffic from your application server and allows other performance-enhancing steps: caching of static files and the use of multiple load-balanced application servers.
» Monitor traffic among servers. Once you’re running multiple servers, you need the ability to monitor performance across them.

Note: You can implement NGINX as a reverse proxy server with either Apache or NGINX as your web server. Decoupling these two functions can simplify implementation.

 

Tip 2 – Replace Your Web Server

The easy way to support multiple connections is to fork a new process for each new connection. This gives you access to all the capabilities of Apache for each new connection. However, the resulting connections are “heavy” – each one has non-trivial start-up time, debugging complexity, and, most importantly, memory requirements.

NGINX was developed to eliminate the overhead that this simplistic approach incurs. NGINX runs an ongoing event loop that handles requests as they occur, without allocating resources to the requestors.


So simply replacing Apache with NGINX is a “quick fix” for performance issues. You can make this change without changing your actual application code.
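As a minimal sketch of that event-driven model (the worker counts here are assumptions you would tune for your hardware), a handful of NGINX worker processes each run an event loop serving thousands of connections:

worker_processes auto;        # typically one worker per CPU core

events {
   worker_connections 10240;  # connections each worker handles inside its event loop
}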

 

Tip 3 – Rewrite URLs

A configuration issue arises when you replace Apache with NGINX: URL rewriting. If the site layout is complex, it’s desirable to display simplified, user-friendly URLs; short, clean URLs also improve flexibility and web usability.

 

In Apache, URL rewriting is typically accomplished via the mod_rewrite module, with rules placed in .htaccess (hypertext access) files, which are directory-level configuration files. NGINX does not read .htaccess files; rewrite rules go in the main NGINX configuration instead.

 

The following NGINX rewrite rule uses the rewrite directive. It matches URLs that begin with the string /download and then include the /media/ or /audio/ directory somewhere later in the path. It replaces those elements with /mp3/ and adds the appropriate file extension, .mp3 or .ra. The $1 and $2 variables capture the path elements that aren’t changing. As an example, /download/cdn-west/media/file1.flv becomes /download/cdn-west/mp3/file1.mp3.


server {
   …
   rewrite ^(/download/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 last;
   rewrite ^(/download/.*)/audio/(.*)\..*$ $1/mp3/$2.ra  last;
   return  403;
   …
}

Tip 4 – Deploy a Reverse Proxy Server

A reverse proxy server receives requests from browsers but does not necessarily process them itself. Instead, the reverse proxy server examines each request and decides, Solomon-like, what to do with it: either carry out the request itself or send it to another server for fulfillment.



The reverse proxy server communicates with the application server quickly, over a local area network. When the application server finishes a request and hands the result back to the reverse proxy server, it doesn’t have to wait to communicate with the actual client over the Internet; instead, the application server can go right back to handling the next application-related request. The reverse proxy server then sends the response back to the client.
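A minimal reverse proxy sketch looks like the following; the backend address is an assumption standing in for your application server on the local network:

server {
   listen 80;

   location / {
       # hand the request to the Drupal/PHP application server over the LAN
       proxy_pass http://192.168.10.20:8080;
       proxy_set_header Host $host;
       proxy_set_header X-Real-IP $remote_addr;
   }
}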

 

Inserting a reverse proxy server can immediately “rescue” a site that’s tipping over due to excessive traffic, security problems (which can now be addressed away from the application processing), or other issues. The reverse proxy server also introduces new flexibility into the site architecture. The new capabilities include:

 

» Caching static files
» Caching dynamic files (microcaching)
» Load balancing
» Scalability
» Security management
» SSL/TLS termination
» Monitoring and management
» Flexibility and redundancy

 

Tip 5 – Cache Static Files

The most common – nearly ubiquitous – use of NGINX as a reverse proxy server is caching static files: graphics and code files – JPEGs, PNGs, CSS files, JavaScript files, and the like. This is very easy to do, gets the cached files to the user faster, and offloads the Drupal server from a significant number of transactions per pageview.

The following configuration block sets up NGINX caching. The proxy_cache_path directive specifies where cached content is stored in the local file system and allocates a named zone in shared memory. The proxy_cache directive is placed in the context that you want to cache for and references the shared memory zone. For more details, see this article on NGINX content caching.


http {
   …
   proxy_cache_path /data/nginx/cache keys_zone=one:10m;

   server {
       proxy_cache one;
       location / {
           proxy_pass http://localhost:8000;
       }
   }
}

For further caching efficiency for static files, consider a content delivery network (CDN). CloudFlare, a widely used CDN, is built on NGINX.

 

Tip 6 – Cache Dynamic Files

Drupal handles caching of PHP-generated web pages for you, and this can significantly improve site performance. As with microcaching for any platform, users don’t all get a newly created version of the web page; instead, they get a copy that’s perhaps a second or ten seconds old. This is usually not a problem, and is preferable to having your overall site performance begin to slow as traffic increases. (In which case, users are not getting fresh content for a different, and worse, reason.)

There are two problems with Drupal’s caching. First, Drupal is not especially efficient at caching. Second, when Drupal is getting overloaded, even the work needed to retrieve cached pages is significant. With NGINX, by contrast, you have a powerful and useful option: bypassing Drupal caching completely.

That’s right – NGINX handles both static file caching and PHP page caching.
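A minimal microcaching sketch, building on the proxy cache from Tip 5 (the zone name, path, and one-second lifetime are assumptions to tune), tells NGINX to serve short-lived cached copies of dynamic pages:

http {
   …
   proxy_cache_path /data/nginx/microcache keys_zone=micro:10m;

   server {
       location / {
           proxy_cache micro;
           proxy_cache_valid 200 1s;       # a successful page may be re-served for one second
           proxy_cache_use_stale updating; # serve the stale copy while a fresh one is fetched
           proxy_pass http://localhost:8000;
       }
   }
}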

Tip 7 – Use Multiple Application Servers and Load Balancing

With a single application server, you only have what’s called vertical scalability: to get more performance, you need a bigger, faster server. This approach is potentially expensive, because the biggest, fastest servers are disproportionately expensive. It’s also inherently limited, because a single device always has an inherent performance limit. When you need more performance, you have to upgrade or replace your current device, a disruptive operation.

Implementing a reverse proxy server, as described in Tip 4 above, allows you to use multiple application servers, giving you horizontal scalability: to get more performance, just add more servers. With the right software tools, such as those found in NGINX Plus, adding and removing servers can be done with no downtime at all.



However, when there are multiple application servers, there has to be some technique for deciding which server gets the next request. This is called load balancing, and techniques range from the simple – a round-robin approach where the next request goes to the next server in line – to sophisticated techniques in which the system checks servers to see which one is least busy (and therefore most available) before forwarding a request.

Load balancing can be performed by hardware appliances or by software load balancers running on standard hardware. Software-based approaches are more flexible – easily used in customer-owned servers, private clouds, and public clouds.

NGINX and NGINX Plus support five load-balancing methods, plus server weights. Four methods are supported in both open source NGINX and NGINX Plus; the Least Time method is supported only in NGINX Plus:

» Round Robin – Each new request goes to the next server in the list, regardless of how busy each server is.
» Least Connections – The next request goes to the server with the fewest active connections.
» Hash – The next request is assigned based on a user-defined key.
» IP Hash – The next request is assigned based on the client IP address.
» Least Time (NGINX Plus only) – NGINX Plus tracks response time for forwarded client requests, then combines that with least-connections information to determine where to send new requests.

The following code shows the weight parameter on the server directive for backend1.example.com:


upstream backend {
   server backend1.example.com weight=5;   # receives five requests for every one sent to an unweighted server
   server backend2.example.com;
   server 192.0.0.1 backup;                # used only when the other servers are unavailable
}
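To pick a different balancing method, you name it at the top of the upstream block. A minimal Least Connections sketch, using the same hypothetical backends, looks like this:

upstream backend {
   least_conn;
   server backend1.example.com;
   server backend2.example.com;
}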

Tip 8 – Support Session Persistence

Adding multiple application servers introduces a problem: what if your app supports interactive functionality, such as a purchasing session, which assumes that the same server handles all requests for a given user throughout a browser session?

Note: When session persistence is in use, loads are still being balanced, but it’s user sessions that are being allocated across application servers, not individual user requests. For a busy system, the difference in granularity doesn’t have much impact.

Session persistence keeps a specific client assigned to the same server throughout its session; this capability is offered only in NGINX Plus.

One method is cookie insertion: when a client makes its first request, NGINX Plus creates a session cookie and returns it to the client. The client includes the cookie in future requests, and NGINX Plus uses it to route those requests to the server that responded to the first request.
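A minimal sketch of cookie insertion with the NGINX Plus sticky directive (the cookie name and expiry here are assumptions):

upstream backend {
   server backend1.example.com;
   server backend2.example.com;
   # NGINX Plus only: insert a cookie named srv_id that pins each client to one server
   sticky cookie srv_id expires=1h path=/;
}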