Quick start guide

Getting Started with Valkey

What is Valkey?

Valkey is a fully open-source, in-memory data store backed by the Linux Foundation, offering microsecond-latency operations on rich data structures. See the Introduction for more.

Installation and Setup

Getting Valkey up and running is straightforward. See the Installation Guide for detailed instructions.

If you just want to try Valkey quickly, head over to Try-Valkey — an interactive playground where you can run any Valkey command right in your browser.

Data Operations

Once connected to Valkey, you interact by issuing commands to store and retrieve data. Valkey behaves like a remote dictionary – you can think of it as a giant hash map on a server. Each piece of data is stored under a unique key, and you use commands to read or modify the values associated with those keys.

Let’s walk through some fundamental operations with two of the most commonly used data types in Valkey: strings and hashes.

127.0.0.1:6379> SET user:1000 "Alice"
OK
127.0.0.1:6379> GET user:1000
"Alice"

Here we use the SET command to save the value "Alice" under the key user:1000, and GET to fetch it back. Valkey keys are often namespaced with prefixes (like user:) to group related items. You can store any data serialized as a string – numbers, JSON, binary blobs, etc. (Up to 512 MB per value, though very large values are not recommended for performance).
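The serialize-before-storing pattern can be sketched in Python. Here a plain dict stands in for the server, so the sketch runs without a connection; with a real client, the same JSON string would be passed to SET and parsed after GET:

```python
import json

# An in-memory dict stands in for the Valkey server in this sketch;
# a real client would send SET/GET with the same string payloads.
store = {}

profile = {"name": "Alice", "email": "alice@example.com", "age": 30}

# Valkey string values are opaque byte sequences, so structured data
# is serialized to a string (JSON here) before being stored.
store["user:1000:profile"] = json.dumps(profile)

# On read, fetch the string and deserialize it back into a dict.
loaded = json.loads(store["user:1000:profile"])
assert loaded["name"] == "Alice"
```

The same round trip works for any serialization format the application chooses, since Valkey does not interpret the value.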

127.0.0.1:6379> HSET user:1001 name "Alice" email "alice@example.com" age "30"
(integer) 3
127.0.0.1:6379> HGET user:1001 name
"Alice"
127.0.0.1:6379> HGETALL user:1001
1) "name"
2) "Alice"
3) "email"
4) "alice@example.com"
5) "age"
6) "30"

We add three fields to the hash stored at user:1001 – a fresh key, since running HSET against user:1000 would fail with a WRONGTYPE error now that that key holds a string. HGET retrieves a single field, and HGETALL returns all fields and values. Hashes are memory-efficient for storing structured data.
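The hash semantics above can be sketched as a dict of dicts in Python. These helper functions are illustrative stand-ins, not a real client API, but they mirror what HSET, HGET, and HGETALL return:

```python
# A hash in Valkey maps field names to string values under one key.
# The outer dict stands in for the keyspace, the inner dicts for hashes.
store = {}

def hset(key, **fields):
    """Mirrors HSET: returns the number of *new* fields added."""
    h = store.setdefault(key, {})
    added = sum(1 for f in fields if f not in h)
    h.update(fields)
    return added

def hget(key, field):
    """Mirrors HGET: a single field's value, or None if absent."""
    return store.get(key, {}).get(field)

def hgetall(key):
    """Mirrors HGETALL: all fields and values of the hash."""
    return dict(store.get(key, {}))

assert hset("user:1001", name="Alice", email="alice@example.com", age="30") == 3
assert hget("user:1001", "name") == "Alice"
```

Note that HSET's return value counts only newly created fields, which is why updating an existing field returns 0.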

Example Use Case: Caching with Valkey

One of the most popular ways to use Valkey is as a caching layer in front of a traditional database or expensive API. By caching results in Valkey, applications can serve repeated requests much faster and reduce load on back-end systems.

Scenario: Imagine a web application that needs to fetch user profile data from a database. Without caching, each page load would query the database, making the app slow under load. With Valkey, you can cache the user data after the first retrieval:

  1. Check cache first: On a user profile request, the application first checks Valkey (using a key like user:42:profile) to see if the data is already cached.
  2. Fallback to DB if miss: If the key is not found in Valkey (a cache miss), the application queries the database for the data.
  3. Store in cache: The result from the database is then stored in Valkey for next time, with an expiration time (TTL).
  4. Subsequent hits: Future requests find the data in Valkey (cache hit) and can skip the database query, returning data to the user much faster.
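The four steps above can be sketched in Python. The dict-backed `cache_set`/`cache_get` helpers stand in for a real client's SET ... EX and GET calls, and `slow_db_query` is a hypothetical placeholder for the database:

```python
import time

# Stands in for Valkey in this sketch: key -> (value, expiry deadline).
store = {}

def cache_set(key, value, ttl):
    """Stand-in for SET key value EX ttl."""
    store[key] = (value, time.monotonic() + ttl)

def cache_get(key):
    """Stand-in for GET: returns None on a miss or an expired key."""
    entry = store.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:
        del store[key]  # lazily drop the expired key, as the server would
        return None
    return value

def slow_db_query(user_id):
    # Hypothetical placeholder for the expensive database call.
    return {"id": user_id, "name": "Alice"}

def get_profile(user_id, ttl=300):
    key = f"user:{user_id}:profile"
    cached = cache_get(key)            # 1. check the cache first
    if cached is not None:
        return cached                  # 4. cache hit: skip the database
    data = slow_db_query(user_id)      # 2. cache miss: query the database
    cache_set(key, data, ttl)          # 3. store the result with a TTL
    return data

assert get_profile(42) == {"id": 42, "name": "Alice"}
```

This read-through pattern (often called cache-aside) keeps the database as the source of truth while serving repeated reads from memory.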

In Valkey, setting a key with an expiration can be done in one command. For example, to cache a rendered page for 5 minutes (300 seconds):

127.0.0.1:6379> SET page:homepage "<html>...rendered content...</html>" EX 300
OK

The EX 300 option tells Valkey to automatically expire (remove) the key after 300 seconds. Until it expires, any request for page:homepage will be served the cached content from memory. You can adjust TTLs based on how fresh the data needs to be. Expiring keys ensures the cache doesn’t serve stale data indefinitely.

Valkey can cache nearly anything – from database query results and API responses to session tokens, rendered HTML, or even generated reports. Database caching is a classic use case, but it’s only the beginning. E-commerce platforms use Valkey to serve personalized recommendations instantly. Gaming companies rely on it for real-time leaderboards and matchmaking. Fintech systems trust Valkey to cache fraud detection signals and scoring results under heavy load. In AI and ML pipelines, Valkey accelerates inference by caching model outputs, storing precomputed embeddings, and managing access tokens across distributed systems. With sub-millisecond latency and the capacity to process hundreds of thousands of operations per second, Valkey is built to keep up — no matter how demanding the workload.

Best Practices and Troubleshooting

Best Practices:

  1. Set a memory limit and eviction policy: configure maxmemory and an eviction policy (such as allkeys-lru) so the cache degrades gracefully instead of exhausting RAM.
  2. Use TTLs on cache keys: expiring keys keeps the dataset bounded and prevents stale data from lingering.
  3. Avoid KEYS in production: it blocks the server while scanning the entire keyspace; use SCAN to iterate incrementally instead.
  4. Namespace your keys: prefixes like user: or page: make related keys easier to manage and debug.

Troubleshooting Common Issues:

  1. Connection refused: verify the server is running (valkey-cli ping should return PONG) and that you are connecting to the right host and port (6379 by default).
  2. Unexpectedly high memory use: inspect INFO memory and look for large values or keys missing TTLs.
  3. WRONGTYPE errors: a command was run against a key holding a different data type; use TYPE <key> to see what is stored there.

For further diagnostics, see the official troubleshooting guide.

Next Steps

Now that you have Valkey running and understand the basics, you can explore more advanced topics and use cases, such as persistence, replication, cluster mode for horizontal scaling, and publish/subscribe messaging.

Happy caching with Valkey! With its speed and flexibility, you now have a powerful tool for building fast, scalable applications. The topics above will guide you as you deepen your Valkey knowledge and tackle more complex scenarios. Good luck on your Valkey journey!