This document describes the founding of a mobile app development company. It covers the company's founding in 2015 and the services it offers, such as mobile app development and information technology consulting. It also includes charts showing the company's growth, with revenue increasing from KRW 17,500 in January 2016 to KRW 62,000 by September 2017 as headcount grew from 2 to 15 over the same period. The document states that the company intends to keep growing by focusing on customer satisfaction.
The document discusses deep learning paper reading roadmaps and lists several GitHub repositories that aggregate deep learning papers. It also covers developing mobile applications that use machine learning and the differences between developing for iOS and Android. Lastly, it encourages continuing to learn through practice and experimentation with deep learning techniques.
The document discusses consistent hashing, a technique for distributing data across multiple servers. Each server and each data item is mapped to a point in a hash space, and a data item is stored on the first server whose point follows the item's point. As a result, only a fraction of the data has to be redistributed when servers are added or removed. The key ideas are placing servers and data items in the same hash space and treating that space as a circular ring when determining placement.
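As a minimal sketch of the idea (not taken from the document; the class name, server names, and the choice of MD5 are illustrative), the following Python ring maps servers and keys to the same hash space and walks clockwise to find a key's owner. Production implementations usually add many virtual nodes per server to smooth the distribution.

```python
import bisect
import hashlib


def _hash(key: str) -> int:
    """Map a key to a point on the ring (MD5 here; any stable hash works)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class ConsistentHashRing:
    """Each server owns the arc that ends at its hash point; a key is
    stored on the first server whose point follows the key's point."""

    def __init__(self, servers=()):
        self._points = []    # sorted hash points of servers
        self._owners = {}    # hash point -> server name
        for s in servers:
            self.add(s)

    def add(self, server: str) -> None:
        h = _hash(server)
        bisect.insort(self._points, h)
        self._owners[h] = server

    def remove(self, server: str) -> None:
        h = _hash(server)
        self._points.remove(h)
        del self._owners[h]

    def get(self, key: str) -> str:
        """Walk clockwise from the key's point, wrapping around the ring."""
        h = _hash(key)
        idx = bisect.bisect_right(self._points, h) % len(self._points)
        return self._owners[self._points[idx]]


ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
print(ring.get("user:1234"))  # same answer until servers are added or removed
```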
Integration between Filebeat and Logstash, by DaeMyung Kang
Filebeat sends log files to Logstash. There are several cases described for integrating Filebeat and Logstash:
1) A simple configuration where one log file is sent from Filebeat to Logstash and output to one file.
2) Another simple configuration where multiple log files are sent from Filebeat to Logstash using a wildcard, and output to one file.
3) An advanced configuration where multiple log files are sent from Filebeat to Logstash, and Logstash outputs each file to a separate file based on the original file name using filtering.
4) A more advanced configuration where log files are sent from Filebeat to Logstash, and Logstash parses the timestamp from each event and uses it in the output file name (a configuration sketch of the last two cases follows below).
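As a rough sketch of cases 3 and 4 (not taken from the original slides; the paths, port, grok patterns, and exact field names are assumptions and vary by Filebeat/Logstash version), the Filebeat side ships every matching file to Logstash:

```
# filebeat.yml (sketch, Filebeat 5.x-style keys): ship all app logs to Logstash
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/app/*.log

output.logstash:
  hosts: ["localhost:5044"]
```

and the Logstash side splits the output per original file and folds the parsed timestamp into the file name:

```
# logstash.conf (sketch): one output file per source file, dated by the parsed timestamp
input {
  beats { port => 5044 }
}

filter {
  # Older Filebeat versions expose the source path as "source";
  # newer ones use [log][file][path]. Extract the base file name.
  grok {
    match => { "source" => "%{GREEDYDATA}/%{GREEDYDATA:filename}" }
  }
  # Case 4: assumes each line starts with an ISO8601 timestamp
  grok {
    match => { "message" => "^%{TIMESTAMP_ISO8601:ts} %{GREEDYDATA:body}" }
  }
  date {
    match => [ "ts", "ISO8601" ]   # sets @timestamp from the parsed value
  }
}

output {
  file {
    # %{+YYYY.MM.dd} is rendered from @timestamp, so the parsed time
    # ends up in the output file name
    path => "/var/log/collected/%{filename}-%{+YYYY.MM.dd}.log"
  }
}
```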
This document discusses Kafka timestamps and offsets. It explains that by default Kafka timestamps each message with the send time supplied by the client. Timestamps are stored in the timeindex file, which is binary-searched when fetching logs by timestamp. A log segment typically rolls when the segment size exceeds the maximum, when the age of the oldest message exceeds the maximum, or when the indexes become full. If a message is appended with a timestamp older than the last entry in the timeindex, no new index entry is added for it.
This document discusses how Kafka handles timestamps and offsets. It explains that Kafka maintains offset-based and time-based indexes so that log data can be fetched by offset or by timestamp. When new log records are appended, the indexes are updated with the largest offset and timestamp. If a record's timestamp is older than the largest timestamp already in the time index, Kafka still appends the record to the log, but no new time index entry is added.
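As an illustrative toy model only (not Kafka's on-disk format or actual code), the following Python sketch shows why a sorted time index supports binary-search lookups by timestamp and why records with older timestamps do not add new index entries:

```python
import bisect


class TimeIndex:
    """Toy model of a .timeindex: (timestamp, offset) pairs in ascending
    timestamp order. An entry is added only when the timestamp is larger
    than the last indexed one, so older timestamps never rewrite entries."""

    def __init__(self):
        self.entries = []  # list of (timestamp, offset)

    def maybe_append(self, timestamp: int, offset: int) -> None:
        if not self.entries or timestamp > self.entries[-1][0]:
            self.entries.append((timestamp, offset))
        # else: the record still lands in the log, but gets no index entry

    def lookup(self, target_ts: int) -> int:
        """Binary-search the first offset whose indexed timestamp >= target_ts."""
        timestamps = [ts for ts, _ in self.entries]
        i = bisect.bisect_left(timestamps, target_ts)
        return self.entries[i][1] if i < len(self.entries) else -1


idx = TimeIndex()
idx.maybe_append(1_500_000_000_000, 0)
idx.maybe_append(1_500_000_001_000, 42)
idx.maybe_append(1_499_999_999_000, 57)  # older timestamp: no new entry
print(idx.lookup(1_500_000_000_500))     # -> 42
```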
This document discusses Redis access control and the Redis ACL protocol version 1 (RCP1). It provides background on security issues with exposing Redis and Memcached servers publicly without authentication. RCP1 aims to address limitations of the existing requirepass authentication by defining user permissions through command groups and implementing access control using bit arrays. The presenter then demonstrates RCP1.
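As a minimal sketch of the bit-array idea only (the command groups, command-to-group mapping, and names below are hypothetical and not the actual RCP1 design), per-user permissions can be modeled as a bitmask over command groups:

```python
# Hypothetical command groups and mapping, for illustration only.
COMMAND_GROUPS = ["read", "write", "admin", "pubsub"]
GROUP_BIT = {name: 1 << i for i, name in enumerate(COMMAND_GROUPS)}

COMMAND_TO_GROUP = {
    "GET": "read",
    "SET": "write",
    "CONFIG": "admin",
    "SUBSCRIBE": "pubsub",
}


class User:
    """A user's permissions stored as a bit array over command groups."""

    def __init__(self, name: str, allowed_groups):
        self.name = name
        self.mask = 0
        for group in allowed_groups:
            self.mask |= GROUP_BIT[group]

    def can_run(self, command: str) -> bool:
        group = COMMAND_TO_GROUP.get(command.upper())
        return group is not None and bool(self.mask & GROUP_BIT[group])


app_user = User("app", ["read", "write"])
print(app_user.can_run("GET"))     # True
print(app_user.can_run("CONFIG"))  # False: admin group not granted
```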