This document introduces Quarkus, an open source Java framework for building container-native microservices. Quarkus uses GraalVM to compile Java code ahead of time, resulting in applications that are up to 10x smaller and 100x faster to start than traditional Java applications, and it is optimized for Kubernetes and serverless workloads. GraalVM achieves this by analyzing the code statically at build time and removing unused classes and methods to generate compact, efficient native executables.
Watch this talk here: https://www.confluent.io/online-talks/how-apache-kafka-works-on-demand
Pick up best practices for developing applications that use Apache Kafka, beginning with a high-level code overview for a basic producer and consumer. From there we’ll cover strategies for building powerful stream processing applications, including high availability through replication, data retention policies, producer design, and producer guarantees.
We’ll delve into the details of delivery guarantees, including exactly-once semantics, partition strategies and consumer group rebalances. The talk will finish with a discussion of compacted topics, troubleshooting strategies and a security overview.
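The partition strategies mentioned above can be illustrated with a small sketch. Kafka's default partitioner hashes the serialized record key (using murmur2) modulo the partition count, so every record with the same key lands on the same partition, which is what preserves per-key ordering. The `PartitionSketch` class below is hypothetical and substitutes `String.hashCode()` for murmur2 purely for illustration; it is not the talk's code.

```java
// Simplified sketch of key-based partition selection. Kafka's real default
// partitioner uses murmur2 over the serialized key bytes; plain hashCode()
// stands in here for illustration only.
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        if (key == null) {
            // Keyless records are spread across partitions (modern Kafka uses
            // a sticky strategy); this sketch just picks partition 0.
            return 0;
        }
        // Mask off the sign bit so the result is non-negative, then mod.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 6;
        // The same key always maps to the same partition.
        int p1 = partitionFor("user-42", partitions);
        int p2 = partitionFor("user-42", partitions);
        System.out.println(p1 == p2);                    // true
        System.out.println(p1 >= 0 && p1 < partitions);  // true
    }
}
```

Note that this determinism is also why changing the partition count of an existing topic breaks key-to-partition mapping for new records.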
This session is part 3 of 4 in our Fundamentals for Apache Kafka series.
This document provides an overview of reactive programming concepts and technologies. It defines reactive programming as using asynchronous and non-blocking code to build responsive and resilient applications. It discusses reactive concepts like the event loop, back pressure, and overflow management. Frameworks like Vert.x and libraries like SmallRye Mutiny that support reactive programming on the JVM are also introduced. The key advantages of reactive programming are supporting more concurrent connections using fewer threads and efficiently processing asynchronous data streams.
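The back-pressure idea described above can be sketched with the JDK's built-in Flow API (`java.util.concurrent`, Java 9+), without pulling in Vert.x or Mutiny: the subscriber signals demand with `request(1)`, so the publisher can never overwhelm it. The class and method names below are illustrative, not from the document.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal back-pressure demo: the subscriber pulls items one at a time,
// which is the core contract underlying reactive-streams libraries.
public class BackpressureDemo {
    // Publishes n integers and consumes them with explicit request(1)
    // demand; returns how many items arrived.
    static int consumeAll(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        AtomicInteger received = new AtomicInteger();

        Flow.Subscriber<Integer> oneAtATime = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;
            @Override public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);            // initial demand: exactly one item
            }
            @Override public void onNext(Integer item) {
                received.incrementAndGet();
                subscription.request(1); // pull the next item only when ready
            }
            @Override public void onError(Throwable t) { done.countDown(); }
            @Override public void onComplete() { done.countDown(); }
        };

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(oneAtATime);
            for (int i = 0; i < n; i++) {
                publisher.submit(i);     // blocks when the buffer is full
            }
        }                                // close() delivers onComplete
        done.await();
        return received.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeAll(100)); // 100
    }
}
```

Overflow management in libraries like Mutiny offers the same choice points this sketch hides: buffer, drop, or fail when demand cannot keep up with supply.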
The document introduces the HSA Intermediate Language (HSAIL) which allows for split compilation between a high-level compiler and a finalizer compiler targeting the specific hardware. HSAIL defines a virtual instruction set architecture that provides optimization opportunities for both compilers while allowing code to run across different machines. It aims to improve performance, portability, and time to market compared to compiling directly to native instruction sets.
"Extended" or "Stretched" Oracle RAC has been available as a concept for a while. Oracle RAC 12c Release 2 introduces an Oracle Extended Cluster configuration, in which the cluster understands the concept of sites and extended setups. This knowledge is used to more efficiently manage "Extended Oracle RAC", whether the nodes are 0.1 mile or 10 miles apart.
The presentation was last updated on August 7th 2017 to add a reference to the new MAA White Paper: "Installing Oracle Extended Clusters on Exadata Database Machine" - http://www.oracle.com/technetwork/database/availability/maa-extclusters-installguide-3748227.pdf and to correct some minor details.
Quarkus - a next-generation Kubernetes Native Java framework (SVDevOps)
For years, the client-server architecture has been the de facto standard for building applications. But a major shift has happened: the one-model-rules-them-all age is over. A new range of applications and architectures has emerged and affects how code is written and how applications are deployed and executed. HTTP microservices, reactive applications, message-driven microservices, and serverless are now central players in modern systems.
Quarkus has been designed with this new world in mind and provides first-class support for these different paradigms. Developers using the Red Hat build of Quarkus can now choose between deploying natively compiled code or JVM-based code depending on an application’s needs. Natively compiled Quarkus applications are extremely fast and memory-efficient, making Quarkus a great choice for serverless and high-density cloud deployments.
Speakers
1) Shanna Chan, Senior Solutions Architect at Red Hat
2) Mark Baker, Senior Solutions Architect at Red Hat
Speaker Bios
Shanna Chan - Shanna is passionate about how open source solutions help others in their journey of application modernization and transformation of their business into cloud infrastructures. Her background includes application developments, DevOps, and architecting solutions for large enterprises. More about Shanna at http://linkedin.com/in/shanna-chan
Mark Baker - Mark’s experience centers on solution and business architecture and on leadership, bringing people together in pre- and post-sales software projects that bridge traditional legacy systems (e.g., Jakarta EE (JEE) MVC) with cloud-tolerant and cloud-native open source in the journey of modernization and transformation. More about Mark at http://linkedin.com/in/markwbaker-tsl
This document discusses using Docker containers on Oracle Exadata systems. It provides an overview of Docker and its key components. It then discusses using Docker for various use cases with Exadata, including hosting Oracle applications and database releases in containers for test and development. It also provides instructions for setting up an Oracle Database in a Docker container on Exadata, such as downloading the necessary files from GitHub, building the Docker image, and using DBCA to configure the database.
The document provides an overview of Domain Driven Design (DDD). It discusses that DDD is not a technology or methodology, but rather a set of principles and patterns for designing software focused around the domain. The key aspects of DDD are understanding the problem domain, creating an expressive model of the domain, and growing a ubiquitous language within the model. The document then discusses what constitutes a model and how it can be represented through diagrams, text descriptions, automated tests or code.
Spark is an open-source cluster computing framework that allows processing of large datasets in parallel. It supports multiple languages and provides advanced analytics capabilities. Spark SQL was built to overcome limitations of Apache Hive by running on Spark and providing a unified data access layer, SQL support, and better performance on medium and small datasets. Spark SQL uses DataFrames and a SQLContext to allow SQL queries on different data sources like JSON, Hive tables, and Parquet files. It provides a scalable architecture and integrates with Spark's RDD API.
In this session, we will discuss the architecture of a Kubernetes cluster. We will go through all the master and worker components of a Kubernetes cluster and cover basic terminology such as Pods, Deployments, and Services. We will also cover networking inside Kubernetes. In the end, we will discuss the options available for setting up a Kubernetes cluster.
Oracle RAC on Extended Distance Clusters - Presentation (Markus Michalewicz)
NOTE that a newer version of this presentation (covering Oracle RAC 12c Release) has been uploaded to my SlideShare: /MarkusMichalewicz/oracle-extended-clusters-for-oracle-rac
This presentation can be used as an illustration for some of the ideas and best practices discussed in the paper "Oracle RAC and Oracle RAC One Node on Extended Distance (Stretched) Clusters"
Standard Edition High Availability (SEHA) - The Why, What & How (Markus Michalewicz)
Standard Edition High Availability (SEHA) is the latest addition to Oracle’s high availability solutions. This presentation explains the motivation for Standard Edition High Availability, how it is implemented and the way it works currently as well as what is planned for future improvements. It was first presented during Oracle Groundbreakers Yatra (OGYatra) Online in July 2020.
In this session, Diógenes gives an introduction of the basic concepts that make OpenShift, giving special attention to its relationship with Linux containers and Kubernetes.
1. The document provides requirements and suggestions for hands-on development with Quarkus, including using Java 8 or 11 for JVM-only development, GraalVM 19.2.1 for native development, and ideas for projects like enabling favorite frameworks or following guides.
2. Ideas mentioned include getting started with Quarkus, following various guides, creating an ASCII banner from a PNG, and using Docker compose with Kafka.
3. Project length is estimated at 6-8 hours and developers are also encouraged to pursue their own ideas.
Oracle Exadata Management with Oracle Enterprise Manager (Enkitec)
This document discusses Oracle Exadata management using Oracle Enterprise Manager. It provides an overview of the key capabilities including monitoring of databases, storage cells, and the full Exadata system. It describes how to discover Exadata targets within Enterprise Manager and ensure proper configuration. Troubleshooting tools are also covered to help diagnose any discovery or monitoring issues. The presentation aims to help customers get started with and take full advantage of Exadata management through Enterprise Manager.
The document provides an overview of Red Hat OpenShift Container Platform, including:
- OpenShift provides a fully automated Kubernetes container platform for any infrastructure.
- It offers integrated services like monitoring, logging, routing, and a container registry out of the box.
- The architecture runs everything in pods on worker nodes, with masters managing the control plane using Kubernetes APIs and OpenShift services.
- Key concepts include pods, services, routes, projects, configs and secrets that enable application deployment and management.
Apache Spark Streaming in K8s with ArgoCD & Spark Operator (Databricks)
Over the last year, we have been moving from a batch-processing setup with Airflow on EC2 instances to a powerful, scalable setup using Airflow and Spark in K8s.
The need to keep pace with technology changes, new community advances, and multidisciplinary teams drove us to design a solution that can run multiple Spark versions at the same time while avoiding duplicated infrastructure and simplifying deployment, maintenance, and development.
This document discusses issues with running OpenStack in a multi-region mode and proposes Tricircle as a solution. It notes that in a multi-region OpenStack deployment, each region runs independently with separate instances of services like Nova, Cinder, Neutron, etc. Tricircle aims to integrate multiple OpenStack regions into a unified cloud by acting as a central API gateway and providing global views and replication of resources, tenants, and metering data across regions. It discusses how Tricircle could address issues around networking, quotas, resource utilization monitoring and more in a multi-region OpenStack deployment.
OpenShift Virtualization allows running virtual machines as containers managed by Kubernetes. It uses KVM with QEMU and libvirt to run virtual machines inside containers. Virtual machines are scheduled and managed like pods through Kubernetes APIs and can access container networking and storage. Templates can be used to simplify virtual machine creation and configuration. Virtual machines can be imported, viewed, managed, and deleted through the OpenShift console and CLI like other Kubernetes resources. Metrics on virtual machine resource usage are also collected.