Cache
- Caches can exist at all levels of an architecture
Application Server Cache
- Placing a cache directly on a request layer node enables local storage of response data. Each time a request hits the service, the node quickly returns locally cached data if it exists; otherwise it fetches the data from disk. The cache can live both in memory (very fast) and on the node’s local disk (still faster than going to network storage).
- Cache on multiple nodes: each node can have its own cache. However, if the load balancer distributes requests across nodes, a request may land on a node that does not hold the data cached on the original node, causing cache misses. To solve this we need a global cache or a distributed cache.
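The lookup flow on a single request-layer node can be sketched as follows (a minimal sketch: the dict-based cache and `fetch_from_disk` are hypothetical stand-ins for a real in-memory cache and the slower disk/network read):

```python
# Minimal sketch of a per-node local cache (hypothetical helpers).
local_cache = {}  # in-memory cache on this request-layer node

def fetch_from_disk(key):
    # Stand-in for the slower disk / network-storage read.
    return "value-for-" + key

def handle_request(key):
    if key in local_cache:           # cache hit: return immediately
        return local_cache[key]
    value = fetch_from_disk(key)     # cache miss: go to slower storage
    local_cache[key] = value         # populate the cache for next time
    return value
```

With a load balancer in front, a second request for the same key may reach a different node whose `local_cache` is empty, which is exactly the miss problem that global/distributed caches address.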
CDN
- If the system we are building is not large enough to have its own CDN, we can ease a future transition by serving the static media off a separate subdomain (e.g., static.yourservice.com) using a lightweight HTTP server like Nginx, and cut the DNS over from our servers to a CDN later.
How does CDN work
A CDN is a network of servers linked together with the goal of delivering content as quickly, cheaply, reliably, and securely as possible. In order to improve speed and connectivity, a CDN will place servers at the exchange points between different networks. By having a connection to these high-speed and highly interconnected locations, a CDN provider is able to reduce costs and transit times in high-speed data delivery.
How does CDN reduce Latency
- Globally distributed: reduces the distance between users and website resources
- Hardware and software optimizations such as efficient load balancing and solid-state hard drives
- Reduce file sizes using tactics such as minification and file compression
- CDNs can also speed up sites which use TLS/SSL certificates by optimizing connection reuse and enabling TLS false start
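As an illustration of the file-compression tactic above, gzip-compressing a repetitive text asset shrinks it considerably (a standard-library sketch only; real CDNs apply gzip/Brotli plus minification at the edge, and the asset bytes here are made up):

```python
import gzip

# A repetitive "asset" compresses well; CDNs do this before transfer.
asset = b"<div class='item'>hello</div>\n" * 500
compressed = gzip.compress(asset)
print(len(asset), len(compressed))  # compressed size is far smaller
```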
Cache Invalidation
To avoid data inconsistency, we’ll need cache invalidation.
- Write-through cache: Under this scheme, data is written into the cache and the corresponding database at the same time.
Advantage- We will have complete data consistency between the cache and the storage.
- Ensures that nothing will get lost in case of a crash, power failure, or other system disruptions.
Disadvantage - Higher latency for writing operations
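A write-through cache can be sketched in a few lines (the dicts are hypothetical stand-ins for a real cache and database; the point is that one write updates both):

```python
# Write-through sketch: every write goes to the cache AND the database
# together, so the two can never disagree (hence higher write latency).
cache, db = {}, {}

def write_through(key, value):
    db[key] = value     # write hits the slower backing store...
    cache[key] = value  # ...and the cache, keeping them consistent

write_through("user:1", {"name": "Ada"})
```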
- Write-around cache: Data is written directly into permanent storage, bypassing the cache.
Advantage- Reduce the cache being flooded with write operations that will not subsequently be re-read
Disadvantage - A read request for recently written data will create a “cache miss” and must be read from slower back-end storage and experience higher latency
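In a write-around sketch, the write path skips the cache entirely, so the first read of fresh data is a miss that repopulates it (dicts again stand in for the real stores):

```python
# Write-around sketch: writes bypass the cache; a later read misses
# and fills the cache on the read path.
cache, db = {}, {}

def write_around(key, value):
    db[key] = value      # write only to permanent storage

def read(key):
    if key in cache:
        return cache[key]
    value = db[key]      # cache miss: slower back-end read
    cache[key] = value   # populate the cache on the read path
    return value

write_around("user:1", "Ada")
miss = "user:1" not in cache  # recently written data is NOT cached
```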
- Write-back cache: Data is written to cache alone and completion is immediately confirmed to the client.
Advantage- This results in low latency and high throughput for write-intensive applications
Disadvantage - This speed comes with the risk of data loss in case of a crash or other adverse event because the only copy of the written data is in the cache
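A write-back sketch makes the risk concrete: writes land only in the cache and reach the database on a later flush, so dirty entries are lost if the node crashes first (dicts and `flush` are hypothetical stand-ins for the real stores and the async flusher):

```python
# Write-back sketch: writes complete after hitting the cache alone;
# dirty entries reach the database only when flush() runs.
cache, db, dirty = {}, {}, set()

def write_back(key, value):
    cache[key] = value   # completion is confirmed after this write
    dirty.add(key)       # remember what still has to reach the db

def flush():
    for key in dirty:    # periodic/asynchronous flush of dirty keys
        db[key] = cache[key]
    dirty.clear()

write_back("user:1", "Ada")
at_risk = "user:1" not in db  # only copy lives in the cache for now
flush()
```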
Cache Eviction Policies
- First In First Out (FIFO): The cache evicts the first block accessed first without any regard to how often or how many times it was accessed before.
- Last In First Out (LIFO): The cache evicts the block accessed most recently first without any regard to how often or how many times it was accessed before.
- Least Recently Used (LRU): Discards the least recently used items first.
- Most Recently Used (MRU): Discards, in contrast to LRU, the most recently used items first.
- Least Frequently Used (LFU): Counts how often an item is needed. Those that are used least often are discarded first.
- Random Replacement (RR): Randomly selects a candidate item and discards it to make space when necessary.
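LRU, the most commonly used of these policies, can be sketched with an `OrderedDict` that tracks access order (a minimal sketch; the capacity of 2 and the keys are arbitrary):

```python
from collections import OrderedDict

# Minimal LRU cache sketch: OrderedDict remembers access order, so the
# least recently used entry is always at the front and is evicted first.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)       # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

c = LRUCache(2)
c.put("a", 1)
c.put("b", 2)
c.get("a")     # "a" becomes most recently used
c.put("c", 3)  # over capacity: evicts "b", the least recently used
```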
Two Approaches (most systems rely heavily on both)
- Application Caching: Application caching requires explicit integration in the application code itself. Usually the code checks whether a value is in the cache; if not, it retrieves the value from the database and then writes that value into the cache
key = "user.%s" % user_id
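That key template fits into the usual get-or-set flow; a minimal sketch (the dict-based cache and `db_lookup` are hypothetical stand-ins for, say, a Memcached client and a real database query):

```python
cache = {}  # stand-in for a Memcached/Redis client

def db_lookup(user_id):
    # Hypothetical stand-in for the real database query.
    return {"id": user_id, "name": "user-%s" % user_id}

def get_user(user_id):
    key = "user.%s" % user_id
    record = cache.get(key)
    if record is None:               # cache miss
        record = db_lookup(user_id)  # retrieve from the database
        cache[key] = record          # write the value into the cache
    return record
```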
- Database Caching
In-memory Caches
- Memcached: http://memcached.org/
- Redis: https://redis.io/ (Redis can be configured to persist some data to disk)