
PWA Studio & React Optimizations

React · PWA Studio · GraphQL · Performance · Core Web Vitals · Code Splitting · Memoization · Lighthouse

On the frontend, Magento PWA Studio (Venia storefront) presents its own unique performance challenges. A React-based storefront with GraphQL data fetching can be blazingly fast or painfully slow, depending entirely on how components are structured, how data is fetched, and how re-renders are managed. This case study covers the optimizations that moved Core Web Vitals from failing to consistently passing on mid-range mobile devices.

GraphQL Query Co-Location

PWA Studio uses GraphQL for all data fetching from the Magento backend. The framework encourages component-level GraphQL fragments — a pattern where each component declares exactly which fields it needs from the graph, and parent components compose these fragments into complete queries. When used correctly, this eliminates over-fetching at the architectural level.

The anti-pattern we found in several projects was lifting queries to the page level. Instead of each component declaring its fragment, a single massive query at the page level fetched every field any component might need, then passed large data objects down through props. This created two compounding problems.

First, over-fetching: the page-level query included fields for components that might not even render (e.g., fetching review data even when reviews are disabled). On the Magento backend, each additional field in a GraphQL query triggers resolver execution — unnecessary fields mean unnecessary database queries, serialization, and network transfer.

Second, prop-drilling defeated memoization. When a parent passes a large product object as a prop to a child component, and any field on that object changes (even one the child doesn't use), the child re-renders. React.memo's shallow comparison sees a new object reference and re-renders everything. With co-located fragments, each component receives only the fields it declared, making shallow comparison effective.
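As a sketch of the co-located pattern (component and fragment names are illustrative, though `price_range` and `small_image` are real Magento GraphQL fields) — in a real PWA Studio project these strings would be wrapped in gql`` from @apollo/client; plain template strings keep the sketch dependency-free:

```typescript
// Each component exports exactly the fields it renders.
export const PRICE_SUMMARY_FRAGMENT = `
  fragment PriceSummary on ProductInterface {
    price_range {
      minimum_price { final_price { value currency } }
    }
  }
`;

// The gallery item composes the price component's fragment instead of
// re-declaring its fields.
export const GALLERY_ITEM_FRAGMENT = `
  fragment GalleryItem on ProductInterface {
    uid
    name
    small_image { url }
    ...PriceSummary
  }
  ${PRICE_SUMMARY_FRAGMENT}
`;

// The page-level query only composes fragments; it never lists leaf fields,
// so no component receives data it did not ask for.
export const GET_CATEGORY_PRODUCTS = `
  query GetCategoryProducts($uid: String!) {
    products(filter: { category_uid: { eq: $uid } }) {
      items { ...GalleryItem }
    }
  }
  ${GALLERY_ITEM_FRAGMENT}
`;
```

Because the gallery item only ever receives the `GalleryItem` fields, a change to, say, review data elsewhere in the cache never produces a new prop object for it.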

Refactoring to co-located fragments required touching nearly every component, but the payoff was immediate: GraphQL response sizes dropped by 35% on average, and the number of unnecessary re-renders on category pages dropped by over 60%. The GraphQL client's cache also became more effective because smaller, more focused fragments meant higher cache hit rates across navigations.

Route-Based Code Splitting

Route-based code splitting is the single highest-leverage React optimization for storefronts. The product listing page, product detail page, checkout, and customer account sections share almost zero component weight — a customer browsing the catalog has no need for the checkout form validation library or the account dashboard components.

Yet the default bundling without code splitting ships everything as a single JavaScript bundle. On a mature PWA Studio project with 15+ pages and 40+ third-party integrations, this bundle easily exceeds 800KB of JavaScript. On a mid-range mobile device on a 4G connection, downloading and parsing 800KB of JavaScript takes 4–6 seconds — well beyond the 2.5-second LCP threshold that Google considers 'good' for Core Web Vitals.

We implemented route-based splitting using React.lazy() and Suspense for each major route: catalog listing, product detail, cart, checkout (multi-step), customer account, and CMS pages. Each lazy-loaded route gets its own webpack chunk that's only downloaded when the user navigates to that route.

The results were dramatic. The initial bundle (home page + shared framework code) dropped from 812KB to 142KB. First Contentful Paint improved by 2.1 seconds on a simulated Moto G4 device. LCP moved from 4.2 seconds to under 2.5 seconds — crossing the threshold from 'poor' to 'good' in Core Web Vitals assessment.

We added route prefetching for likely next navigations: when a user views a category page, the product detail page chunk starts loading in the background. When they're on the cart page, the checkout chunk prefetches. This means most navigations feel instant despite the code splitting, because the chunk is already cached by the time the user clicks.

Critical CSS extraction complemented the code splitting. Above-the-fold styles for the home page and category listing (the two most common entry points) are inlined in the HTML response. Below-the-fold styles load asynchronously. This eliminated the FOUC (Flash of Unstyled Content) that code splitting can introduce if CSS loading isn't handled carefully.

Surgical Cart Memoization

The shopping cart context is one of the most frequently updated state slices in a Magento PWA. Quantity changes, coupon applications, shipping estimate calculations, tax recalculations, and gift card applications all trigger cart state updates. Every update creates a new cart context value, and every component consuming that context re-renders.

In a typical PWA Studio checkout flow, the cart context is consumed by: the mini-cart icon (shows item count), the mini-cart dropdown (shows item list), the cart page (full item details + totals), the checkout shipping step (needs shipping methods), the checkout payment step (needs cart totals), and the order summary sidebar (shows itemized totals). A single quantity change triggers re-renders in all of these components simultaneously.

The naive fix — wrapping everything in React.memo() — doesn't work because React.memo performs shallow comparison, and the cart context value is a new object reference on every update. Even if a component only uses cart.itemCount and that value hasn't changed, it re-renders because the parent context object is new.

Our solution was surgical memoization with custom equality functions. Each cart-consuming component is wrapped in React.memo() with a comparator that checks only the specific cart fields that component uses. The mini-cart icon compares only itemCount. The shipping step compares only shippingMethods and selectedShippingMethod. The payment step compares only cartTotal and appliedCoupons.
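As a minimal sketch (prop and field names hypothetical), the comparator for the mini-cart icon ignores every cart field except the one it displays:

```typescript
// The full cart object still arrives as a prop, but React.memo only lets
// the icon re-render when the one field it displays has changed.
export interface MiniCartProps {
  cart: { itemCount: number; grandTotal: number; items: unknown[] };
}

export function miniCartPropsEqual(prev: MiniCartProps, next: MiniCartProps): boolean {
  // Returning true means "props are equal — skip the re-render".
  return prev.cart.itemCount === next.cart.itemCount;
}

// Usage: export default React.memo(MiniCartIcon, miniCartPropsEqual);
```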

We also introduced selector hooks that extract specific values from the cart context and memoize them individually: useCartItemCount(), useCartTotal(), useShippingMethods(). These hooks use useRef internally to track the previous value and only trigger a re-render when their specific slice actually changes. This pattern — inspired by Redux's useSelector — gives us the convenience of React Context with the render efficiency of a state management library.

The measured impact: during a checkout flow where a user changes quantity, applies a coupon, and selects shipping, the total number of component re-renders dropped from 847 to 124 — an 85% reduction. On a mid-range mobile device, this eliminated the perceptible input lag (200–300ms delay between typing and seeing the character appear) that users were experiencing in the checkout form fields.

Core Web Vitals Targeting & CI Budgets

Performance optimization without measurement is guesswork. We integrated Lighthouse CI into the deployment pipeline with per-metric budgets that block merges if violated. The budgets reflect Google's Core Web Vitals thresholds: LCP under 2.5 seconds, CLS under 0.1, FID (now INP) under 200ms, all measured on a simulated Moto G4 on 4G — representing the experience of a median mobile user, not a developer's M1 MacBook on fiber.
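A `lighthouserc.js` enforcing budgets along those lines might look like this (the staging URL and exact audit set are illustrative; note that Lighthouse's lab proxy for FID/INP responsiveness is Total Blocking Time):

```javascript
// lighthouserc.js — per-metric budgets mirroring the Core Web Vitals thresholds.
module.exports = {
  ci: {
    collect: {
      url: ['https://staging.example.com/'],
      numberOfRuns: 3, // median of 3 runs smooths out noise
    },
    assert: {
      assertions: {
        'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
        'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
        // TBT stands in for FID/INP in lab conditions.
        'total-blocking-time': ['error', { maxNumericValue: 200 }],
        'categories:performance': ['error', { minScore: 0.9 }],
      },
    },
    upload: { target: 'temporary-public-storage' },
  },
};
```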

Each pull request triggers a Lighthouse CI run against a staging deployment. The results are posted as a PR comment showing current scores, comparison to the baseline, and pass/fail status for each budget. If any metric regresses beyond the budget threshold, the PR is blocked from merging. This prevents death-by-a-thousand-cuts performance degradation where each individual change is 'only 50ms slower' but the cumulative effect is a 2-second regression over a quarter.

Image optimization targeted LCP specifically. Product images — the largest contentful paint element on most e-commerce pages — are served as responsive srcsets with WebP format and AVIF where supported. Images below the fold use loading='lazy' with an IntersectionObserver-based polyfill for older browsers. Above-the-fold images use fetchpriority='high' to signal the browser to download them first.
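A minimal helper along these lines (the `?width`/`&format` query parameters are assumptions for the sketch; in a real project the storefront's own image component generates these URLs):

```typescript
// Build a srcset string for a responsive product image.
export function buildSrcSet(
  baseUrl: string,
  widths: number[],
  format: 'webp' | 'avif',
): string {
  return widths
    .map((w) => `${baseUrl}?width=${w}&format=${format} ${w}w`)
    .join(', ');
}

// An above-the-fold hero <img> would use this srcset together with
// fetchpriority="high" and no loading="lazy" attribute; below-the-fold
// images get loading="lazy" instead.
```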

Font loading strategy addresses CLS (Cumulative Layout Shift). Custom fonts are preloaded via <link rel='preload'> tags in the document head, and font-display: swap ensures text is visible immediately with a fallback font while the custom font loads. The fallback font is size-adjusted (using size-adjust, ascent-override, and descent-override CSS properties) to match the custom font's metrics as closely as possible, minimizing the layout shift when the font swap occurs.
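In CSS, the setup looks roughly like this (family name, file path, and override percentages are illustrative; the real override values must be computed per font pair):

```css
/* Custom font, preloaded in the document head with <link rel="preload">. */
@font-face {
  font-family: 'BrandSans';
  src: url('/fonts/brand-sans.woff2') format('woff2');
  font-display: swap; /* render text immediately with the fallback */
}

/* Metric-adjusted fallback so the swap shifts layout as little as possible. */
@font-face {
  font-family: 'BrandSans Fallback';
  src: local('Arial');
  size-adjust: 105%;
  ascent-override: 92%;
  descent-override: 22%;
}

body {
  font-family: 'BrandSans', 'BrandSans Fallback', sans-serif;
}
```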

Real User Monitoring (RUM) via web-vitals library provides production data that complements lab testing. Lab tests (Lighthouse) measure potential performance; RUM measures actual performance as experienced by real users on real devices. We found cases where lab scores were excellent but RUM data showed poor INP on specific Android devices — leading to targeted optimizations for those device profiles that lab testing alone would never have caught.
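The reporting glue around web-vitals stays tiny. In this sketch the `/rum/vitals` endpoint and payload shape are assumptions, and `send` is injected so the serialization is testable outside a browser:

```typescript
// Subset of the web-vitals Metric fields we report.
export interface VitalSample {
  name: 'CLS' | 'INP' | 'LCP';
  value: number;
  id: string;
  rating: 'good' | 'needs-improvement' | 'poor';
}

// Serialize one metric and hand it to the transport.
export function reportVital(metric: VitalSample, send: (body: string) => void): void {
  send(JSON.stringify({
    name: metric.name,
    value: Math.round(metric.value * 1000) / 1000, // trim float noise
    id: metric.id,
    rating: metric.rating,
  }));
}

// Browser wiring in the app entry point:
//   import { onCLS, onINP, onLCP } from 'web-vitals';
//   const send = (body: string) => { navigator.sendBeacon('/rum/vitals', body); };
//   onCLS((m) => reportVital(m as VitalSample, send));
//   onINP((m) => reportVital(m as VitalSample, send));
//   onLCP((m) => reportVital(m as VitalSample, send));
```

`sendBeacon` is used because it survives page unload, which is exactly when CLS and INP values are finalized.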

Results

  • LCP improved from 4.2s to under 2.5s on mobile
  • Initial bundle size reduced from 812KB to 142KB
  • CLS score dropped to near-zero with font and layout stabilization
  • 85% reduction in unnecessary component re-renders during checkout
  • GraphQL response sizes reduced by 35% via fragment co-location
  • Checkout input lag eliminated on mid-range mobile devices
  • Lighthouse performance score consistently above 90
