
PHP & Magento 2 Performance Optimizations

PHP, Magento 2, OPcache, Blackfire, Xhprof, MySQL, Redis, ProxySQL, Performance

PHP performance isn't just about upgrading versions — it's a systematic process of profiling, identifying bottlenecks, and applying targeted optimizations at every layer of the stack. In Magento 2 specifically, the framework's dependency injection system, plugin interceptor chains, and EAV database schema create unique performance challenges that require deep domain expertise to resolve.

Profiling First, Always

The single most important rule of performance optimization is: never optimize without profiling data. Intuition about where time is spent is almost always wrong, especially in a framework as complex as Magento 2 where a simple product page load triggers hundreds of class instantiations and database queries behind the scenes.

We instrumented the application with Blackfire and Xhprof profiling across three request types: category listing pages (the most common customer-facing request), product detail pages (the most complex rendering), and checkout submission (the most business-critical). Each profile captured wall time, CPU time, memory allocation, and I/O wait — giving a complete picture of where time was actually spent.

The results were revealing. 40% of request time was consumed by object instantiation through Magento's dependency injection container — constructing objects that were never actually used in the request path. Another 25% went to redundant database queries, particularly on EAV attribute loads where the same product attributes were fetched multiple times per request. A further 15% was lost to unoptimized autoloading — the Composer autoloader scanning through hundreds of directories for each class resolution.

This data completely changed our optimization priorities. Instead of focusing on infrastructure (more RAM, faster CPUs), we focused on code-level changes that addressed the actual bottlenecks. Infrastructure optimization came later as a complement, not a substitute.

OPcache & Runtime Tuning

OPcache is PHP's built-in bytecode cache, and its configuration is one of the highest-leverage performance knobs available — yet it's chronically misconfigured in most Magento installations. The default settings are designed for small applications, not for Magento's massive class graph.

The interned_strings_buffer setting was the biggest single win. Magento's DI-generated code produces thousands of unique class names, method names, and string constants. The default 8MB buffer causes OPcache to constantly evict and re-intern strings, adding measurable overhead to every request. Bumping this to 32–64MB eliminated the eviction entirely, cutting TTFB by 15–20% on cache-cold requests.

We also increased opcache.max_accelerated_files from the default 10,000 to 130,000 — Magento 2 with a typical set of third-party modules easily has 80,000+ PHP files. When this limit is reached, OPcache starts evicting cached files and recompiling them on next access, creating unpredictable latency spikes. The file cache (opcache.file_cache) was enabled for CLI processes (cron, queue consumers) which don't benefit from shared memory but still pay the compilation cost.
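Taken together, the settings above fit in a short php.ini fragment. This is a sketch under the assumptions in the text — the memory_consumption value and file cache path are illustrative and should be tuned to your module count and memory budget:

```ini
; Sized for a large Magento 2 codebase (80,000+ PHP files) -- verify against
; your own file count with: find . -name '*.php' | wc -l
opcache.memory_consumption=512        ; illustrative; measure actual usage
opcache.interned_strings_buffer=64    ; default 8MB causes constant eviction
opcache.max_accelerated_files=130000  ; default 10,000 is far too low
; Bytecode file cache for CLI processes (cron, queue consumers) that
; cannot use the shared-memory cache but still pay compilation cost
opcache.file_cache=/var/cache/php-opcache
opcache.file_cache_only=0
```

After changing these values, restart PHP-FPM and confirm with `opcache_get_status()` that `interned_strings_usage` and `num_cached_scripts` stay below their limits under production load.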

PHP-FPM pool sizing was recalculated based on actual memory profiling. We used pm.max_children = (available_memory - OS_overhead) / average_request_memory, measured per request type. Frontend requests averaged 85MB, admin requests averaged 180MB. Setting pool sizes based on these real numbers instead of guesswork prevented both OOM kills and wasted capacity.
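The sizing formula is simple enough to sanity-check by hand. A minimal sketch, assuming a hypothetical 64GB host with 4GB reserved for the OS and the measured per-request averages from profiling:

```python
# pm.max_children = (available_memory - OS_overhead) / average_request_memory
# The 64GB / 4GB host figures are illustrative assumptions; the 85MB and
# 180MB per-request averages are the measured values from the text.

def max_children(available_mb: int, os_overhead_mb: int, avg_request_mb: int) -> int:
    """Largest worker count that fits in memory without risking OOM kills."""
    return (available_mb - os_overhead_mb) // avg_request_mb

frontend_pool = max_children(64 * 1024, 4 * 1024, 85)   # frontend: 85MB/request
admin_pool = max_children(64 * 1024, 4 * 1024, 180)     # admin: 180MB/request
print(frontend_pool, admin_pool)  # 722 341
```

The point of measuring per pool is visible in the output: the admin pool gets less than half the workers of the frontend pool on identical hardware, because its requests are roughly twice as heavy.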

JIT compilation (available since PHP 8.0) was enabled selectively via opcache.jit_buffer_size=64M with tracing mode for compute-heavy operations like price rule calculations and catalog search indexing. For typical web requests dominated by I/O wait, JIT provides minimal benefit — but for batch operations, it delivered a measurable 10–15% improvement.
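The JIT settings amount to two ini directives; this fragment targets the CLI processes that run the batch workloads, leaving web requests untouched:

```ini
; Enable tracing JIT for CLI workers (indexers, price rule recalculation).
; Web-facing FPM pools are left with JIT off: I/O-bound requests see little gain.
opcache.enable_cli=1
opcache.jit=tracing
opcache.jit_buffer_size=64M
```

`opcache.jit=tracing` is the named alias for the numeric mode 1254; either form is accepted by PHP 8.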

Plugin Chain Auditing

Magento 2's plugin (interceptor) system is the framework's primary extension mechanism — and its biggest hidden performance tax. Every plugin registered on a method wraps that method call with generated interceptor code, adding overhead to every invocation. In a typical Magento installation with 30+ third-party modules, some methods accumulate dozens of plugins.
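For context, each plugin is declared in a module's etc/di.xml; every such entry adds a layer to the interceptor class Magento generates for the target. A minimal sketch with hypothetical vendor and class names:

```xml
<!-- etc/di.xml: hypothetical plugin registration. Magento generates an
     Interceptor subclass of the target type, and every registered plugin
     adds a wrapper around the named method's invocation. -->
<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:noNamespaceSchemaLocation="urn:magento:framework:ObjectManager/etc/config.xsd">
    <type name="Magento\Catalog\Model\Product">
        <plugin name="vendor_price_markup"
                type="Vendor\Module\Plugin\PriceMarkup"
                sortOrder="10" />
    </type>
</config>
```

Auditing means walking every module's di.xml for `<plugin>` entries on hot types — each one is a candidate for conversion or removal.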

We built a profiling report that listed every method with more than 5 plugins, sorted by call frequency × plugin count. The worst offender was Magento\Catalog\Model\Product::getPrice(), which had 34 plugins registered across core and third-party modules. Since getPrice() is called hundreds of times per category page (once per product, often multiple times for tier pricing), each unnecessary plugin multiplied across thousands of invocations per request.

The most impactful optimization was converting 'around' plugins to 'before' or 'after' variants wherever possible. An 'around' plugin wraps the entire method execution, forcing PHP to maintain a closure and call stack for each one. A 'before' or 'after' plugin is a simple function call with negligible overhead. In many cases, third-party module developers used 'around' plugins simply because it was the most flexible option, not because they actually needed to modify the method's return value conditionally.
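The conversion itself is mechanical when the 'around' plugin only post-processes the result. A hypothetical sketch (the PriceMarkup plugin and applyMarkup helper are illustrative, not from the original codebase):

```php
<?php
// Before: an 'around' plugin forces Magento to build a closure chain
// ($proceed) for every single call to getPrice().
public function aroundGetPrice(Product $subject, callable $proceed): float
{
    $price = $proceed(); // unconditionally calls through -- 'around' adds nothing here
    return $this->applyMarkup($price);
}

// After: an 'after' plugin receives the result directly; no closure,
// no extra call-stack frame per plugin per invocation.
public function afterGetPrice(Product $subject, float $result): float
{
    return $this->applyMarkup($result);
}
```

The litmus test: if the plugin always calls $proceed() exactly once and only touches the return value, it belongs as 'after'; if it only inspects or rewrites arguments, it belongs as 'before'.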

We also identified and removed 12 plugins that were completely unnecessary — registering hooks on methods to log debug data, check feature flags that were always enabled, or add functionality that duplicated existing core behavior. Removing these dead plugins produced a 22% reduction in time spent in interceptor code across all measured request types. This kind of optimization is invisible at the infrastructure level and can only be achieved through deep Magento code analysis.

MySQL & EAV Query Optimization

Magento's Entity-Attribute-Value (EAV) database schema is one of its most controversial design decisions. Instead of storing product attributes in columns on a single table, each attribute value gets its own row in a separate table, joined by entity_id and attribute_id. Loading a single product with 50 attributes requires joining 5–8 tables (one per data type: varchar, int, decimal, text, datetime).

MySQL's EXPLAIN output on these joins told the real story. Many of the EAV joins were performing full table scans because the default indexes on catalog_product_entity_varchar and similar tables only covered entity_id, not the combination of entity_id + attribute_id + store_id that Magento's queries actually filter on. Adding composite indexes on these three columns routinely cut catalog page query time by 60–70%.
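The fix is a composite index matching the actual filter pattern. A sketch of the DDL — the index name is arbitrary, and you should run SHOW INDEX first, since schema and existing keys vary between Magento versions:

```sql
-- Composite index covering Magento's real EAV filter:
-- WHERE entity_id = ? AND attribute_id IN (...) AND store_id IN (0, ?)
ALTER TABLE catalog_product_entity_varchar
  ADD INDEX idx_entity_attribute_store (entity_id, attribute_id, store_id);

-- Verify the optimizer picks it up before and after:
EXPLAIN SELECT value FROM catalog_product_entity_varchar
 WHERE entity_id = 42 AND attribute_id = 73 AND store_id IN (0, 1);
```

The same pattern applies to the sibling tables (catalog_product_entity_int, _decimal, _text, _datetime), since Magento issues the same-shaped query against each.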

For read-heavy storefronts (which is most of them — catalog browsing traffic far exceeds admin catalog management), we enabled Magento's flat catalog indexer. This pre-joins the EAV tables into a single flat table during indexing, turning 8-table JOIN queries into simple single-table SELECTs. The trade-offs are slower indexing, more storage, and the fact that flat catalog has been deprecated since Magento 2.3 — but on older installations with under 100,000 SKUs, it is almost always the right choice.

Beyond indexing, we identified and eliminated N+1 query patterns in third-party modules. A common anti-pattern: loading a product collection, then iterating over each product and lazy-loading related data individually. This turned a 1-query collection load into 1 + N queries, where N is the number of products. Refactoring these to use Magento's collection addFieldToSelect() and joinTable() methods batched the queries properly.
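The anti-pattern and its fix look like this in collection code — a hypothetical sketch using a 'brand' attribute as the lazily loaded data:

```php
<?php
// Anti-pattern: 1 collection query + N per-product attribute loads.
foreach ($collection as $product) {
    $brand = $product->getResource()->getAttributeRawValue(
        $product->getId(), 'brand', $storeId   // one extra query per product
    );
}

// Batched: the attribute is selected as part of the single collection
// query, so the loop does no database work at all.
$collection->addAttributeToSelect('brand');
foreach ($collection as $product) {
    $brand = $product->getData('brand');       // already in memory
}
```

On a 36-product category page, that is 37 queries collapsed into 1 — and the savings compound when several modules each repeat the pattern.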

Finally, we configured ProxySQL as a connection pooler between PHP-FPM and MySQL. PHP-FPM creates a new database connection per request, and the overhead of TCP handshake + MySQL authentication adds 2–5ms per connection. ProxySQL maintains a persistent connection pool, reusing existing connections and eliminating this overhead entirely. On a site handling 200 requests/second, this saved 400–1000ms of aggregate connection setup time per second.
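ProxySQL is configured through its SQL-based admin interface. A minimal sketch — hostname, credentials, and hostgroup are placeholders for illustration:

```sql
-- Run against ProxySQL's admin interface (default port 6032).
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, '127.0.0.1', 3306);

INSERT INTO mysql_users (username, password, default_hostgroup)
VALUES ('magento', 'change-me', 0);

-- ProxySQL requires an explicit load-to-runtime and save-to-disk step.
LOAD MYSQL SERVERS TO RUNTIME; SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;   SAVE MYSQL USERS TO DISK;
```

PHP-FPM then points its MySQL host at ProxySQL (port 6033 by default) instead of the database directly; the application code is unchanged.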

Results

  • 40–60% reduction in average response time across all page types
  • 15–20% TTFB improvement from OPcache tuning alone
  • 60–70% reduction in catalog page query time via composite indexes
  • 22% reduction in interceptor overhead through plugin chain auditing
  • 3x improvement in sustainable requests per second
  • P99 latency dropped from 2.4s to 800ms

Want to discuss a similar challenge? Get in touch →