Normalized cache
Apollo Android provides two different kinds of caches: an HTTP cache and a normalized cache. The HTTP cache is easier to set up but also has more limitations. This page focuses on the normalized cache. If you're looking for a simpler albeit coarser cache, take a look at the HTTP cache.
Data normalization
The normalized cache stores objects by ID.
query BookWithAuthorName {
  favoriteBook {
    id
    title
    author {
      id
      name
    }
  }
}

query AuthorById($id: String!) {
  author(id: $id) {
    id
    name
  }
}
In the above example, requesting the author of your favorite book with the AuthorById query will return a result from the cache if you requested your favorite book before. This works because the author is stored only once in the cache, and all the fields that AuthorById needs were already retrieved by the initial BookWithAuthorName query. If you were to request more fields, like birthdate for example, that wouldn't work anymore.
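For instance, assuming an apolloClient configured with a normalized cache as shown below, the classes generated for the two queries above, and the coroutines extensions used later on this page, the second query can be served entirely from the cache. This is a minimal sketch; the CACHE_ONLY response fetcher is covered further down:
// Fetch the book once; this normalizes and stores "author1" in the cache
apolloClient.query(BookWithAuthorName()).toFlow().collect { /* just fill the cache */ }

// This can now be answered from the cache, without a network round trip
apolloClient.query(AuthorById(id = "author1"))
    .responseFetcher(ApolloResponseFetchers.CACHE_ONLY)
    .toFlow()
    .collect { response ->
        println(response.data?.author?.name) // "Pierre Bordage"
    }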
To learn more about the process of normalization, check this blog post.
Storing your data in memory
Apollo Android comes with an LruNormalizedCache that will store your data in memory:
// Create a 10MB NormalizedCacheFactory
val cacheFactory = LruNormalizedCacheFactory(EvictionPolicy.builder().maxSizeBytes(10 * 1024 * 1024).build())

// Build the ApolloClient
val apolloClient = ApolloClient.builder()
    .serverUrl("https://...")
    .normalizedCache(cacheFactory)
    .build()
Persisting your data in a SQLite database
If the amount of data you store becomes too big to fit in memory or if you want your data to persist between app restarts, you can also use a SqlNormalizedCacheFactory
. A SqlNormalizedCacheFactory
will store your data in a SQLDelight database and is defined in a separate dependency:
dependencies {
    implementation("com.apollographql.apollo:apollo-normalized-cache-sqlite:x.y.z")
}
Note: The apollo-normalized-cache-sqlite dependency has Kotlin multiplatform support and has multiple variants (-jvm, -android, -ios-arm64, ...). If you are targeting Android and using custom buildTypes, you will need to help Gradle resolve the correct artifact by defining matchingFallbacks:
android {
    buildTypes {
        create("custom") {
            // your code...
            matchingFallbacks = listOf("debug")
        }
    }
}
Once the dependency is added, create the SqlNormalizedCacheFactory:
// Android
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory(context, "apollo.db")

// JVM
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory("jdbc:sqlite:apollo.db")

// iOS
val sqlNormalizedCacheFactory = SqlNormalizedCacheFactory("apollo.db")

// Build the ApolloClient
val apolloClient = ApolloClient.builder()
    .serverUrl("https://...")
    .normalizedCache(sqlNormalizedCacheFactory)
    .build()
Chaining caches
To get the best of both caches, you can chain an LruNormalizedCacheFactory with a SqlNormalizedCacheFactory:
val sqlCacheFactory = SqlNormalizedCacheFactory(context, "db_name")
val memoryFirstThenSqlCacheFactory = LruNormalizedCacheFactory(
    EvictionPolicy.builder().maxSizeBytes(10 * 1024 * 1024).build()
).chain(sqlCacheFactory)
Reads will read from the first cache hit in the chain. Writes will propagate down the entire chain.
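The chained factory is then passed to the ApolloClient builder exactly like a single factory:
val apolloClient = ApolloClient.builder()
    .serverUrl("https://...")
    .normalizedCache(memoryFirstThenSqlCacheFactory)
    .build()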
Specifying your object IDs
By default, Apollo Android uses the field path as the key to store data. Going back to the original example:
query BookWithAuthorName {
  favoriteBook {
    id
    title
    author {
      id
      name
    }
  }
}

query AuthorById($id: String!) {
  author(id: $id) {
    id
    name
  }
}
This will store the following records:
"favoriteBook"
:{"id": "book1", "title": "Les guerriers du silence", "author": "ApolloCacheReference{favoriteBook.author}"}
"favoriteBook.author"
:{"id": "author1", name": "Pierre Bordage"}
"author("id": "author1")"
:{"id": "author1", "name": "Pierre Bordage"}
"QUERY_ROOT"
:{"favoriteBook": "ApolloCacheReference{favoriteBook}", "author(\"id\": \"author1\")": "ApolloCacheReference{author(\"id\": \"author1\")}"}
This is undesirable, both because it takes more space, and because modifying one of those objects will not notify the watchers of the other. What you want instead is this:
"book1"
:{"id": "book1", "title": "Les guerriers du silence", "author": "ApolloCacheReference{author1}"}
"author1"
:{"id": "author1", name": "Pierre Bordage"}
"QUERY_ROOT"
:{"favoriteBook": "book1", "author(\"id\": \"author1\")": "author1"}
To do this, specify a CacheKeyResolver when configuring your NormalizedCacheFactory:
val resolver: CacheKeyResolver = object : CacheKeyResolver() {
    override fun fromFieldRecordSet(field: ResponseField, recordSet: Map<String, Any>): CacheKey {
        // Retrieve the id from the object itself
        return CacheKey.from(recordSet["id"] as String)
    }

    override fun fromFieldArguments(field: ResponseField, variables: Operation.Variables): CacheKey {
        // Retrieve the id from the field arguments.
        // In the example, this is how the cache knows that `author(id: "author1")` resolves to `author1`.
        // That sounds straightforward, but without it the cache would have no way of finding the id
        // before executing the request on the network, which is exactly what we want to avoid.
        return CacheKey.from(field.resolveArgument("id", variables) as String)
    }
}
val apolloClient = ApolloClient.builder()
    .serverUrl("https://...")
    .normalizedCache(cacheFactory, resolver)
    .build()
For this resolver to work, every object in your graph needs to have a globally unique ID. If some of them don't have one, you can fall back to using the path as the cache key by returning CacheKey.NO_KEY.
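For example, if only some of your types expose an id, a resolver along these lines (a sketch building on the one above) can key those objects by id and let everything else keep the default path-based key:
val resolver: CacheKeyResolver = object : CacheKeyResolver() {
    override fun fromFieldRecordSet(field: ResponseField, recordSet: Map<String, Any>): CacheKey {
        // Key the record by its id when it has one, else keep the default path-based key
        val id = recordSet["id"] as? String
        return if (id != null) CacheKey.from(id) else CacheKey.NO_KEY
    }

    override fun fromFieldArguments(field: ResponseField, variables: Operation.Variables): CacheKey {
        // Only fields that take an id argument can be resolved before hitting the network
        val id = field.resolveArgument("id", variables) as? String
        return if (id != null) CacheKey.from(id) else CacheKey.NO_KEY
    }
}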
Using the cache with your queries
You control how the cache is used with ResponseFetchers:
// Get a response from the cache if possible. Else, get a response from the network
// This is the default behavior
val apolloCall = apolloClient.query(BookWithAuthorName()).responseFetcher(ApolloResponseFetchers.CACHE_FIRST)
Other possibilities are CACHE_ONLY, NETWORK_ONLY, CACHE_AND_NETWORK and NETWORK_FIRST. See the ResponseFetchers class for more details.
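For example, you could force a refresh from the server, or show cached data immediately while a network request is in flight. A sketch using the same query as above:
// Always go to the network for fresh data
apolloClient.query(BookWithAuthorName())
    .responseFetcher(ApolloResponseFetchers.NETWORK_ONLY)
    .toFlow()
    .collect { response -> /* fresh data */ }

// Return the cached response first (if any), then the network response
apolloClient.query(BookWithAuthorName())
    .responseFetcher(ApolloResponseFetchers.CACHE_AND_NETWORK)
    .toFlow()
    .collect { response -> /* called once or twice */ }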
Reacting to changes in the cache
One big advantage of using a normalized cache is that your UI can now react to changes in your cache data. If you want to be notified every time something changes in book1, you can use a QueryWatcher:
apolloClient.query(BookWithAuthorName()).watcher().toFlow().collect { response ->
    // This will be called every time the book or author changes
}
Interacting with the cache
To manipulate the cache directly, ApolloStore exposes read() and write() methods:
// Reading data from the store
val data = apolloClient.apolloStore.read(BookWithAuthorName()).execute()

// Create data to write
val data = BookWithAuthorName.Data(
    id = "book1",
    title = "Les guerriers du silence",
    author = BookWithAuthorName.Author(
        id = "author1",
        name = "Pierre Bordage"
    )
)

// Write to the store. All watchers will be notified
apolloClient.apolloStore.writeAndPublish(BookWithAuthorName(), data).execute()
Troubleshooting
If you are experiencing cache misses, check your cache size and eviction policy. Some records might have been removed from the cache. Increasing the cache size and/or retention period will help you hit your cache more consistently.
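For instance, assuming your version of EvictionPolicy.Builder exposes time-based expiry in addition to maxSizeBytes, a more generous in-memory cache could look like this (a sketch, not a recommendation for every app):
// 50MB in-memory cache that also keeps records for a day after their last read
val generousCacheFactory = LruNormalizedCacheFactory(
    EvictionPolicy.builder()
        .maxSizeBytes(50 * 1024 * 1024)
        .expireAfterAccess(1, java.util.concurrent.TimeUnit.DAYS)
        .build()
)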
If you are still experiencing cache misses, you can dump the contents of the cache:
val dump = apolloClient.apolloStore.normalizedCache().dump()
println(NormalizedCache.prettifyDump(dump))
Make sure that no data is duplicated in the dump. If it is, it probably means that some objects have a wrong CacheKey. Make sure to provide a CacheKeyResolver that can work with your graph. All objects should have a unique and stable ID, meaning the ID should be the same no matter which path leads to the object in the graph. That also means you have to include the identifier field in your queries so that it can be used by the CacheKeyResolver.
Finally, make sure to design your queries so that you can reuse fields. A single missing field in the cache for a query will trigger a network fetch. Sometimes it might be useful to query an extra field early on so that it can be reused by later queries.