Spring data ii
M.C. Kang
   Redis Overview
Redis is an extremely high-performance, lightweight data store.
It provides key/value access to persistent byte arrays, lists, sets, and hash data structures.
It supports atomic counters and also offers efficient topic-based pub/sub messaging.
Redis is simple to install and run and is, above all, very fast at data access.
What it lacks in complex querying functionality (like that found in Riak or MongoDB), it makes up for in speed and efficiency.
Redis servers can also be clustered together for very flexible deployment.
It's easy to interact with Redis from the command line using the redis-cli binary that comes with the installation.
   ConnectionFactory
@Configuration
public class ApplicationConfig {

    private static final StringRedisSerializer STRING_SERIALIZER = new StringRedisSerializer();

    @Bean
    public JedisConnectionFactory connectionFactory() {
        JedisConnectionFactory connectionFactory = new JedisConnectionFactory();
        connectionFactory.setHostName("localhost");
        connectionFactory.setPort(6379);
        return connectionFactory;
    }

    @Bean
    public RedisTemplate<String, Long> longTemplate() {
        RedisTemplate<String, Long> tmpl = new RedisTemplate<String, Long>();
        // Wire in the connection factory bean defined above
        tmpl.setConnectionFactory(connectionFactory());
        tmpl.setKeySerializer(STRING_SERIALIZER);
        tmpl.setValueSerializer(LongSerializer.INSTANCE);
        return tmpl;
    }
}
(The two generic parameters of RedisTemplate are the key type and the value type.)
   RedisTemplate
Since the feature set of Redis is really too large to effectively encapsulate into a single class, the various operations on data are split up into separate Operations classes, as follows:

• ValueOperations
• ListOperations
• SetOperations
• ZSetOperations
• HashOperations
• BoundValueOperations
• BoundListOperations
• BoundSetOperations
• BoundZSetOperations
• BoundHashOperations
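As a rough sketch of how these views are used (assuming the longTemplate bean from the configuration above and a running Redis server on localhost; the key names here are made up for illustration):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.BoundListOperations;
import org.springframework.data.redis.core.ListOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.core.ValueOperations;

public class OperationsSketch {

    @Autowired
    RedisTemplate<String, Long> redis;

    public void useOperationsViews() {
        // Each opsForXxx() call returns a view over the same template,
        // specialized for one Redis data structure
        ValueOperations<String, Long> values = redis.opsForValue();
        values.set("counters:visits", 1L);

        ListOperations<String, Long> lists = redis.opsForList();
        lists.rightPush("recent:product-ids", 42L);

        // Bound variants fix the key once, so each call needn't repeat it
        BoundListOperations<String, Long> recent = redis.boundListOps("recent:product-ids");
        recent.rightPush(43L);
        Long size = recent.size();
    }
}
```

The unbound views suit code that touches many keys; the bound views read more cleanly when a method works against a single key throughout.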
   Object Conversion
Because Redis deals directly with byte arrays and doesn't natively perform Object-to-byte[] translation, the Spring Data Redis project provides some helper classes to make it easier to read and write data from Java code.
By default, all keys and values are stored as serialized Java objects.

public enum LongSerializer implements RedisSerializer<Long> {
    INSTANCE;

    @Override
    public byte[] serialize(Long aLong) throws SerializationException {
        if (null != aLong) {
            return aLong.toString().getBytes();
        } else {
            return new byte[0];
        }
    }

    @Override
    public Long deserialize(byte[] bytes) throws SerializationException {
        // Guard against null as well as empty input
        if (bytes != null && bytes.length > 0) {
            return Long.parseLong(new String(bytes));
        } else {
            return null;
        }
    }
}
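The mapping itself can be exercised without Redis at all. Below is the same byte[] conversion logic, stripped of the Spring RedisSerializer interface so it runs standalone; the class and method names are my own, for illustration only:

```java
public class LongBytesRoundTrip {

    // Mirrors LongSerializer.serialize: a Long becomes its decimal string's bytes
    static byte[] toBytes(Long value) {
        return value != null ? value.toString().getBytes() : new byte[0];
    }

    // Mirrors LongSerializer.deserialize: null or empty input maps back to null
    static Long fromBytes(byte[] bytes) {
        return (bytes != null && bytes.length > 0) ? Long.parseLong(new String(bytes)) : null;
    }

    public static void main(String[] args) {
        byte[] wire = toBytes(499L);
        // Round-trips through the textual representation
        System.out.println(fromBytes(wire)); // prints 499
        System.out.println(fromBytes(toBytes(null))); // prints null
    }
}
```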
   Automatic type conversion when setting and getting values
public class ProductCountTracker {
     @Autowired
     RedisTemplate<String, Long> redis;


     public void updateTotalProductCount(Product p) {
           // Use a namespaced Redis key
           String productCountKey = "product-counts:" + p.getId();
           // Get the helper for getting and setting values
           ValueOperations<String, Long> values = redis.opsForValue();
           // Initialize the count if not present
           values.setIfAbsent(productCountKey, 0L);
           // Increment the value by 1
           Long totalOfProductInAllCarts = values.increment(productCountKey, 1);
     }
}
   Using the HashOperations interface
private static final RedisSerializer<String> STRING_SERIALIZER = new StringRedisSerializer();

public void updateTotalProductCount(Product p) {
    RedisTemplate<String, String> tmpl = new RedisTemplate<String, String>();
    tmpl.setConnectionFactory(connectionFactory);
    // Use the standard String serializer for all keys and values
    tmpl.setKeySerializer(STRING_SERIALIZER);
    tmpl.setHashKeySerializer(STRING_SERIALIZER);
    tmpl.setHashValueSerializer(STRING_SERIALIZER);
    // Required when the template is created outside the container
    tmpl.afterPropertiesSet();
    HashOperations<String, String, String> hashOps = tmpl.opsForHash();
    // Access the attributes for the Product
    String productAttrsKey = "products:attrs:" + p.getId();
    Map<String, String> attrs = new HashMap<String, String>();
    // Fill attributes
    attrs.put("name", "iPad");
    attrs.put("deviceType", "tablet");
    attrs.put("color", "black");
    attrs.put("price", "499.00");
    hashOps.putAll(productAttrsKey, attrs);
}
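Reading the hash back uses the same HashOperations view: get fetches one field, and entries loads the whole hash as a Map. A sketch, assuming hashOps is configured exactly as above (the attrsKey helper is introduced here for illustration):

```java
import java.util.Map;
import org.springframework.data.redis.core.HashOperations;

public class ProductAttrsReader {

    // Key-building helper, mirroring the namespacing used when writing
    static String attrsKey(String productId) {
        return "products:attrs:" + productId;
    }

    public String readColor(HashOperations<String, String, String> hashOps, String productId) {
        // Fetch a single field without pulling the whole hash
        String color = hashOps.get(attrsKey(productId), "color");
        // Or load every field at once as a Map
        Map<String, String> attrs = hashOps.entries(attrsKey(productId));
        return color;
    }
}
```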
   Using Atomic Counters
public class CountTracker {
     @Autowired
     RedisConnectionFactory connectionFactory;
     public void updateProductCount(Product p) {
           // Use a namespaced Redis key
           String productCountKey = "product-counts:" + p.getId();
           // Create a distributed counter.
           // Initialize it to zero if it doesn't yet exist
           RedisAtomicLong productCount =
           new RedisAtomicLong(productCountKey, connectionFactory, 0);
           // Increment the count
           Long newVal = productCount.incrementAndGet();
     }
}
   Pub/Sub Functionality
An important benefit of using Redis is its simple and fast publish/subscribe functionality.
Although it doesn't have the advanced features of a full-blown message broker, Redis's pub/sub capability can be used to create a lightweight and flexible event bus.
Spring Data Redis exposes a couple of helper classes that make working with this functionality extremely easy.

Following the pattern of the JMS MessageListenerAdapter, Spring Data Redis has a MessageListenerAdapter abstraction that works in basically the same way:

@Bean
public MessageListener dumpToConsoleListener() {
    return new MessageListener() {
        @Override
        public void onMessage(Message message, byte[] pattern) {
            System.out.println("FROM MESSAGE: " + new String(message.getBody()));
        }
    };
}

@Bean
MessageListenerAdapter beanMessageListener() {
    MessageListenerAdapter listener = new MessageListenerAdapter(new BeanMessageListener());
    listener.setSerializer(new BeanMessageSerializer());
    return listener;
}

@Bean
RedisMessageListenerContainer container() {
    RedisMessageListenerContainer container = new RedisMessageListenerContainer();
    container.setConnectionFactory(connectionFactory());
    // Assign our BeanMessageListener to a specific channel
    container.addMessageListener(beanMessageListener(), new ChannelTopic("spring-data-book:pubsub-test:dump"));
    return container;
}
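The listing above covers only the subscribing side. Publishing to the same channel goes through the template's convertAndSend method, which serializes the payload and delivers it to every registered listener. A minimal sketch, assuming a StringRedisTemplate bean is available:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;

public class PubSubPublisher {

    @Autowired
    StringRedisTemplate redis;

    public void publishDump(String text) {
        // Publishes on the channel the container above subscribed to;
        // the dumpToConsoleListener would print the message body
        redis.convertAndSend("spring-data-book:pubsub-test:dump", text);
    }
}
```

Note that Redis pub/sub is fire-and-forget: a message published while no subscriber is connected is simply lost, which is part of why it suits a lightweight event bus rather than a durable queue.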
   Spring's Cache Abstraction with Redis
Spring 3.1 introduced a common and reusable caching abstraction. This makes it easy to cache the results of method calls in your POJOs without having to explicitly manage the process of checking for the existence of a cache entry, loading new entries, and expiring old ones.

Spring Data Redis supports this generic caching abstraction with o.s.data.redis.cache.RedisCacheManager.
To designate Redis as the backend for the Spring caching annotations, you just need to define a RedisCacheManager bean in your ApplicationContext, then annotate your POJOs as you normally would, with @Cacheable on the methods you want cached.
@Configuration
@EnableCaching
public class CachingConfig extends ApplicationConfig {

    @Bean
    public RedisCacheManager redisCacheManager() {
        RedisTemplate tmpl = new RedisTemplate();
        // Reuse the connection factory bean inherited from ApplicationConfig
        tmpl.setConnectionFactory(connectionFactory());
        tmpl.setKeySerializer(IntSerializer.INSTANCE);
        tmpl.setValueSerializer(new JdkSerializationRedisSerializer());
        return new RedisCacheManager(tmpl);
    }
}

 @Cacheable(value = "greetings")
 public String getCacheableValue() {
    long now = System.currentTimeMillis();
    return "Hello World (@ " + now + ")!";
 }