How I Reduced API Response Time by 99.7% Using Redis Cache with Express.js and MongoDB

Learn how implementing Redis caching reduced my Express.js + MongoDB API response time from 956.96ms to 2.18ms - a 99.7% performance improvement. Includes practical code examples and implementation strategies.

By Navid
September 15, 2025
5 min read

Performance is everything in modern web applications. When users expect lightning-fast responses, even a few hundred milliseconds can make the difference between user engagement and abandonment. Here's how I transformed a sluggish API into a speed demon using Redis caching.

The Performance Problem



My Express.js application with MongoDB was struggling with API response times averaging 956.96ms for database-heavy operations. Users were experiencing noticeable delays, especially on data-rich endpoints that required multiple database queries and complex aggregations.

Initial Setup:


- Backend: Express.js with Node.js
- Database: MongoDB with Mongoose ODM
- Bottleneck: Complex aggregation queries and frequent database hits
- Average Response Time: 956.96ms

The Redis Solution



Redis (Remote Dictionary Server) is an in-memory data structure store that can act as a database, cache, and message broker. By implementing Redis as a caching layer, I could store frequently accessed data in memory and serve it lightning-fast.
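
Conceptually, the caching layer boils down to two operations: write a value with an expiry, and read it back until it expires. Here's a minimal sketch of that primitive, assuming an already-connected node-redis v4 client named `client` (the key and values are illustrative):

```javascript
// The core caching primitive: write a value with a TTL, read it back until
// it expires. Assumes `client` is a connected node-redis v4 client.
const demo = async () => {
  await client.set('greeting', 'hello', { EX: 60 }); // expires in 60 seconds
  console.log(await client.get('greeting'));         // 'hello' (null after expiry)
  console.log(await client.ttl('greeting'));         // seconds remaining
};
demo();
```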

Implementation Strategy



1. Cache Frequently Accessed Data
```javascript
const redis = require('redis');
const client = redis.createClient();
// node-redis v4+ requires an explicit connection before issuing commands
client.connect().catch(console.error);

// Cache user profiles for 1 hour
app.get('/api/users/:id', async (req, res) => {
  const cacheKey = `user:${req.params.id}`;

  try {
    // Check cache first
    const cachedUser = await client.get(cacheKey);
    if (cachedUser) {
      return res.json(JSON.parse(cachedUser));
    }

    // If not in cache, fetch from database
    const user = await User.findById(req.params.id);

    // Store in cache with a 1-hour expiration
    await client.setEx(cacheKey, 3600, JSON.stringify(user));

    res.json(user);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```


2. Cache Complex Aggregation Results
```javascript
// Cache expensive dashboard analytics
app.get('/api/dashboard/analytics', async (req, res) => {
  const cacheKey = 'dashboard:analytics';

  try {
    const cached = await client.get(cacheKey);
    if (cached) {
      return res.json(JSON.parse(cached));
    }

    // Expensive MongoDB aggregation: orders from the last 24 hours,
    // grouped by status with counts and totals
    const analytics = await Order.aggregate([
      { $match: { createdAt: { $gte: new Date(Date.now() - 24 * 60 * 60 * 1000) } } },
      { $group: { _id: '$status', count: { $sum: 1 }, total: { $sum: '$amount' } } }
    ]);

    // Cache for 15 minutes
    await client.setEx(cacheKey, 900, JSON.stringify(analytics));

    res.json(analytics);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```


3. Implement Cache-Aside Pattern with Middleware
```javascript
const cacheMiddleware = (duration = 3600) => {
  return async (req, res, next) => {
    const key = `cache:${req.originalUrl}`;

    try {
      const cached = await client.get(key);
      if (cached) {
        return res.json(JSON.parse(cached));
      }

      // Wrap res.json so the outgoing response is cached before being sent
      const originalJson = res.json;
      res.json = function (data) {
        // Fire-and-forget write; a failed cache write shouldn't fail the request
        client.setEx(key, duration, JSON.stringify(data)).catch(() => {});
        return originalJson.call(this, data);
      };

      next();
    } catch (error) {
      // If the cache is unreachable, fall through to the route handler
      next();
    }
  };
};

// Usage
app.get('/api/products', cacheMiddleware(1800), async (req, res) => {
  const products = await Product.find().populate('category');
  res.json(products);
});
```


The Results: 99.7% Performance Improvement



Before Redis Implementation:


- Average Response Time: 956.96ms
- Database Queries: 8-12 queries per request
- User Experience: Noticeable loading delays
- Server Load: High CPU usage during peak hours

After Redis Implementation:


- Average Response Time: 2.18ms
- Performance Improvement: 99.7%
- Cache Hit Rate: 89% for frequently accessed data
- Server Load: Reduced by 60%
- User Experience: Instant loading
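
For context on how numbers like these can be captured, here's a minimal per-request timing sketch. It's illustrative, not necessarily the exact harness behind the figures above:

```javascript
// Illustrative sketch: log per-request latency to compare before/after caching
const responseTime = (req, res, next) => {
  const start = process.hrtime.bigint();
  res.on('finish', () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${req.method} ${req.originalUrl} took ${ms.toFixed(2)}ms`);
  });
  next();
};

app.use(responseTime);
```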

Key Optimization Strategies



1. Smart Cache Key Design
```javascript
// Include relevant parameters in cache keys so different queries
// never collide on the same entry
const getCacheKey = (endpoint, userId, filters) => {
  const filterString = Object.keys(filters)
    .sort() // sort so equivalent filter objects produce identical keys
    .map(key => `${key}:${filters[key]}`)
    .join('|');
  return `${endpoint}:${userId}:${filterString}`;
};

// e.g. getCacheKey('products', 'u42', { sort: 'price', category: 'books' })
// -> 'products:u42:category:books|sort:price'
```


2. Cache Invalidation Strategy
```javascript
// Invalidate related caches when data updates
app.put('/api/users/:id', async (req, res) => {
  try {
    // { new: true } returns the updated document rather than the old one
    const user = await User.findByIdAndUpdate(req.params.id, req.body, { new: true });

    // Invalidate user cache
    await client.del(`user:${req.params.id}`);

    // Invalidate related caches (note: KEYS blocks Redis and is O(n);
    // prefer SCAN for large keyspaces in production)
    const pattern = `user_posts:${req.params.id}:*`;
    const keys = await client.keys(pattern);
    if (keys.length > 0) {
      await client.del(keys);
    }

    res.json(user);
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});
```


3. Gradual Cache Warming
```javascript
// Pre-populate cache for critical data
const warmCache = async () => {
  console.log('Warming up cache...');

  // Cache the 50 most-viewed products for an hour
  const popularProducts = await Product.find().sort({ views: -1 }).limit(50);
  for (const product of popularProducts) {
    await client.setEx(`product:${product._id}`, 3600, JSON.stringify(product));
  }

  console.log('Cache warmed successfully');
};

// Run on server startup
warmCache().catch(console.error);
```


Best Practices and Lessons Learned



Do's:


- Set appropriate TTL values - Balance between data freshness and performance
- Monitor cache hit rates - Aim for 80%+ hit rate for frequently accessed data
- Use Redis clustering for high-availability production environments
- Implement fallback mechanisms when cache is unavailable (see the sketch after this list)
- Cache at multiple levels - Application, database query, and API response levels
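
To make the fallback point concrete, here's a minimal sketch of a hypothetical `getOrFetch` helper (the name and shape are mine, not from the original implementation): if Redis is unreachable, requests degrade to direct database reads instead of failing.

```javascript
// Hypothetical helper: serve from cache when possible; if Redis is down,
// fall back to the database so a cache outage never breaks the request
const getOrFetch = async (key, ttlSeconds, fetchFn) => {
  try {
    const cached = await client.get(key);
    if (cached) return JSON.parse(cached);
  } catch (err) {
    console.error('Cache read failed, falling back to DB:', err.message);
  }

  const data = await fetchFn();

  try {
    await client.setEx(key, ttlSeconds, JSON.stringify(data));
  } catch (err) {
    console.error('Cache write failed (non-fatal):', err.message);
  }

  return data;
};

// Usage: the same user endpoint as earlier, now resilient to cache outages
app.get('/api/users/:id', async (req, res) => {
  const user = await getOrFetch(`user:${req.params.id}`, 3600, () =>
    User.findById(req.params.id)
  );
  res.json(user);
});
```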

Don'ts:


- Don't cache everything - Only cache data that's accessed frequently
- Don't use overly long TTL for frequently changing data
- Don't forget cache invalidation - Stale data can cause serious issues
- Don't cache user-sensitive data without proper security measures

Monitoring and Metrics


```javascript
// Track cache performance
const cacheStats = {
  hits: 0,
  misses: 0,
  errors: 0
};

// Call these from the caching middleware on every cache lookup
const trackCacheHit = () => cacheStats.hits++;
const trackCacheMiss = () => cacheStats.misses++;

app.get('/api/cache/stats', (req, res) => {
  const total = cacheStats.hits + cacheStats.misses;
  const hitRate = total ? (cacheStats.hits / total * 100).toFixed(2) : '0.00';
  res.json({
    ...cacheStats,
    hitRate: `${hitRate}%`
  });
});
```


Production Considerations



Redis Configuration for Production:
- Use Redis Cluster for scalability
- Configure persistence (RDB + AOF)
- Set up monitoring with Redis Sentinel
- Implement proper security (AUTH, network isolation)
- Plan for memory management and eviction policies (a minimal sketch follows)
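
For that last point, here's a minimal sketch of eviction settings applied at runtime via CONFIG SET. The values are illustrative assumptions, not the ones from my deployment; the same directives can live in redis.conf instead:

```javascript
// Illustrative eviction settings (values are assumptions, tune per workload);
// equivalent to `maxmemory` / `maxmemory-policy` directives in redis.conf
const configureEviction = async () => {
  await client.configSet('maxmemory', '2gb');                // cap cache memory
  await client.configSet('maxmemory-policy', 'allkeys-lru'); // evict LRU keys when full
};
configureEviction().catch(console.error);
```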

Conclusion



Implementing Redis caching transformed my application's performance from sluggish to lightning-fast. The 99.7% improvement (956.96ms → 2.18ms) wasn't just about adding a cache layer - it required thoughtful implementation, proper cache invalidation, and continuous monitoring.

The key takeaways:
- Start with bottlenecks - Identify your slowest endpoints first
- Implement gradually - Begin with read-heavy operations
- Monitor continuously - Track hit rates and performance metrics
- Plan for scale - Design your caching strategy for production loads

Redis caching is a game-changer for API performance, but success lies in thoughtful implementation and continuous optimization. The investment in time pays dividends in user experience and server efficiency.

Tags: Performance Optimization, Redis, Express.js, MongoDB, Caching, Node.js, API Optimization, Database, Speed, Backend
