The document describes an IT solution blueprint for building efficient disaster recovery (DR) solutions using a cookie-cutter approach. It outlines a DR solution built on VMware, NetBackup, and NEC servers/storage using virtualization. The solution is designed to meet tight RPO and RTO requirements of 5 minutes. It demonstrates failover and failback workflows to move operations from the production site to the DR site and back. The blueprint approach aims to reuse proven architectural principles and building blocks to deliver more sophisticated, reliable solutions cost-effectively.
Snowflake: The most cost-effective agile and scalable data warehouse ever! – Visual_BI
In this webinar, the presenter will take you through the most revolutionary data warehouse, Snowflake, with a live demo and technical and functional discussions with a customer. Ryan Goltz from Chesapeake Energy and Tristan Handy, creator of DBT Cloud and owner of Fishtown Analytics, will also be joining the webinar.
Analyze key aspects to be considered before embarking on your cloud journey. The presentation outlines the strategies, approach, and choices that need to be made, to ensure a smooth transition to the cloud.
The document discusses options for moving Oracle E-Business Suite (EBS) workloads to Oracle Cloud. It addresses customer concerns about ongoing support for EBS and outlines business drivers for cloud adoption like reducing costs and improving insights. The document presents three paths to the cloud: 1) re-platforming EBS on Oracle Cloud Platform by lifting and shifting workloads, 2) extending on-premises EBS with additive SaaS applications, and 3) shifting specific EBS environments like development, testing, reporting or disaster recovery to the cloud. Oracle Cloud is positioned as providing benefits like centralized management, rapid provisioning and integration with Oracle infrastructure services.
Modernizing to a Cloud Data Architecture – Databricks
Organizations with on-premises Hadoop infrastructure are bogged down by system complexity, unscalable infrastructure, and the increasing burden on DevOps to manage legacy architectures. Costs and resource utilization continue to go up while innovation has flatlined. In this session, you will learn why, now more than ever, enterprises are looking for cloud alternatives to Hadoop and are migrating off of the architecture in large numbers. You will also learn how the benefits of elastic compute models helped one customer scale their analytics and AI workloads, along with best practices from their successful migration of data and workloads to the cloud.
This document provides an overview of CI/CD on Google Cloud Platform. It discusses key DevOps principles like treating infrastructure as code and automating processes. It then describes how GCP services like Cloud Build, Container Registry, Source Repositories, and Stackdriver can help achieve CI/CD. Spinnaker is mentioned as an open-source continuous delivery platform that integrates well with GCP. Overall the document outlines the benefits of CI/CD and how GCP makes CI/CD implementation easy and scalable.
How to Set Up a Cloud Cost Optimization Process for your Enterprise – RightScale
As cloud spend grows, enterprises need to set up internal processes to manage and optimize their cloud costs. This process will help organizations accurately allocate and report on costs while minimizing wasted spend. In this webinar, experts from RightScale's Cloud Cost Optimization team will share best practices on how to set up your own internal processes.
The document discusses Snowflake, a cloud data platform. It covers Snowflake's data landscape and benefits over legacy systems. It also describes how Snowflake can be deployed on AWS, Azure and GCP. Pricing is noted to vary by region but not cloud platform. The document outlines Snowflake's editions, architecture using a shared-nothing model, support for structured data, storage compression, and virtual warehouses that can autoscale. Security features like MFA and encryption are highlighted.
This document summarizes Navisite's cloud assessment services, which provide comprehensive guidance for customers migrating to the cloud. The assessment includes discovery of current infrastructure and applications, cloud readiness evaluation, optimization recommendations, migration planning, and cost analysis. The process involves automated data collection, interviews, analysis of application dependencies and performance, and deliverables such as architecture design, cost projections, and a phased migration roadmap. An example case study outlines how these services helped an airline reduce data centers and implement a scalable cloud solution.
Datadog: a Real-Time Metrics Database for One Quadrillion Points/Day – C4Media
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/2mAKgJi.
Ian Nowland and Joel Barciauskas talk about the challenges Datadog faces as the company has grown its real-time metrics systems that collect, process, and visualize data to the point they now handle trillions of points per day. They also talk about how the architecture has evolved, and what they are looking to in the future as they architect for a quadrillion points per day. Filmed at qconnewyork.com.
Ian Nowland is the VP Engineering Metrics and Alerting at Datadog. Joel Barciauskas currently leads Datadog's distribution metrics team, providing accurate, low latency percentile measures for customers across their infrastructure.
In this presentation we will discuss the planning considerations as well as some applicable methodologies and tools involved in the development and execution of a large AWS migration strategy.
This document outlines an agenda for a 90-minute workshop on Snowflake. The agenda includes introductions, an overview of Snowflake and data warehousing, demonstrations of how users utilize Snowflake, hands-on exercises loading sample data and running queries, and discussions of Snowflake architecture and capabilities. Real-world customer examples are also presented, such as a pharmacy building new applications on Snowflake and an education company using it to unify their data sources and achieve a 16x performance improvement.
This document provides an overview of the Microsoft Cloud Adoption Framework for Azure. It begins by explaining why cloud adoption is important, noting that 91% of organizations see digital transformation as critical to their business and that shifting to the cloud can provide significant cost savings and revenue benefits. It then introduces the Cloud Adoption Framework, which is an iterative process to help organizations define their cloud strategy, plan their adoption, prepare for change, adopt technologies by migrating or innovating, and govern and manage their cloud environment. Common blockers to cloud adoption are discussed along with the various tools, templates, and assessments available to help organizations overcome those blockers at each stage of the framework.
Databricks CEO Ali Ghodsi introduces Databricks Delta, a new data management system that combines the scale and cost-efficiency of a data lake, the performance and reliability of a data warehouse, and the low latency of streaming.
Cloud Migration Cookbook: A Guide To Moving Your Apps To The Cloud – New Relic
The process of building new apps or migrating existing apps to a cloud-based platform is complex. There are hundreds of paths you can take and only a few will make sense for you and your business. Get a step-by-step guide on how to plan for a successful app migration.
Infrastructure as Code represents treating infrastructure components like software that can be version controlled, tested, and deployed. The document discusses tools and techniques for implementing Infrastructure as Code including using version control, continuous integration/delivery, configuration automation, and virtual labs for testing changes. It provides examples of workflows using these techniques and recommends starting small and evolving Infrastructure as Code practices over time.
Scaling and Modernizing Data Platform with Databricks – Databricks
This document summarizes Atlassian's adoption of Databricks to manage their growing data pipelines and platforms. It discusses the challenges they faced with their previous architecture around development time, collaboration, and costs. With Databricks, Atlassian was able to build scalable data pipelines using notebooks and connectors, orchestrate workflows with Airflow, and provide self-service analytics and machine learning to teams while reducing infrastructure costs and data engineering dependencies. The key benefits included reduced development time by 30%, decreased infrastructure costs by 60%, and increased adoption of Databricks and self-service across teams.
The document discusses the AWS Cloud Adoption Framework (CAF) which provides guidance for organizations to develop a cloud adoption strategy and roadmap. The CAF includes 7 perspectives - People, Process, Security, Maturity, Platform, Operations, Business. It describes typical first steps such as skills assessment, foundational services setup, and application portfolio assessment. Key elements for a successful cloud adoption journey are also outlined such as executive sponsorship, experimentation principles, a cloud center of excellence, and an adoption roadmap aligned to business needs.
Building Reliable Lakehouses with Apache Flink and Delta Lake – Flink Forward
Flink Forward San Francisco 2022.
Apache Flink and Delta Lake together allow you to build the foundation for your data lakehouses by ensuring the reliability of your concurrent streams from processing to the underlying cloud object store. Together, the Flink/Delta Connector enables you to store data in Delta tables such that you harness Delta's reliability by providing ACID transactions and scalability while maintaining Flink's end-to-end exactly-once processing. This ensures that the data from Flink is written to Delta tables in an idempotent manner such that even if the Flink pipeline is restarted from its checkpoint information, the pipeline will guarantee no data is lost or duplicated, thus preserving the exactly-once semantics of Flink.
by Scott Sandre & Denny Lee
Building Cloud-Native App Series - Part 3 of 11
Microservices Architecture Series
AWS Kinesis Data Streams
AWS Kinesis Firehose
AWS Kinesis Data Analytics
Apache Flink - Analytics
Architects Open-Source Guide for a Data Mesh Architecture – Databricks
Data Mesh is an innovative concept addressing many data challenges from an architectural, cultural, and organizational perspective. But is the world ready to implement Data Mesh?
In this session, we will review the importance of core Data Mesh principles, what they can offer, and when it is a good idea to try a Data Mesh architecture. We will discuss common challenges with implementation of Data Mesh systems and focus on the role of open-source projects for it. Projects like Apache Spark can play a key part in standardized infrastructure platform implementation of Data Mesh. We will examine the landscape of useful data engineering open-source projects to utilize in several areas of a Data Mesh system in practice, along with an architectural example. We will touch on what work (culture, tools, mindset) needs to be done to ensure Data Mesh is more accessible for engineers in the industry.
The audience will leave with a good understanding of the benefits of Data Mesh architecture, common challenges, and the role of Apache Spark and other open-source projects for its implementation in real systems.
This session is targeted for architects, decision-makers, data-engineers, and system designers.
Data Lakehouse, Data Mesh, and Data Fabric (r1) – James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I'll cover all of them in detail and compare the pros and cons of each. I'll include use cases so you can see what approach will work best for your big data needs.
This document discusses various cloud migration strategies. It suggests starting with a partial approach by moving generic applications or non-critical infrastructure to the cloud as a first step. A full assessment of applications is needed to determine what can be retired, replaced with SaaS, refactored for PaaS, or initially rehosted on IaaS. It outlines a 5 step process for cloud migration including determining public vs private cloud, integration strategies, and transition architecture. The overall goal is to leverage the cloud platform to reduce costs and improve flexibility over time.
REA have taken an innovative approach to building strong financial management across their infrastructure team. The visibility and ownership of costs has been improved through modifying team structure, interactive finance processes, budgeting operations and improved cost management behaviours. Hear from both an operational and finance perspective the value delivered and lessons learnt by REA on this FinOps journey.
Speakers:
Katerina Martianova, Commercial Manager - IT, REA Group
Javier Turegano, Global Infrastructure and Architecture Manager, REA Group
This session provides an introduction to the AWS platform and services. It explains how you can get started on your cloud journey and what resources you can use to build sophisticated applications with increased flexibility, scalability and reliability. The session also covers the benefits customers are enjoying by moving to AWS cloud: increased agility, faster decision making and the ability to fail fast and innovate.
Microsoft Data Platform - What's included – James Serra
This document provides an overview of a speaker and their upcoming presentation on Microsoft's data platform. The speaker is a 30-year IT veteran who has worked in various roles including BI architect, developer, and consultant. Their presentation will cover collecting and managing data, transforming and analyzing data, and visualizing and making decisions from data. It will also discuss Microsoft's various product offerings for data warehousing and big data solutions.
Azure Cost Management is a native Azure service that helps you analyze costs, create and manage budgets, export data, and review and act on optimization recommendations to save money.
Microsoft Azure and Windows Application monitoring – Site24x7
Monitor all your Microsoft applications and Azure services from a single console.
About Site24x7:
Site24x7 offers unified cloud monitoring for DevOps and IT operations. Monitor the experience of real users accessing websites and applications from desktop and mobile devices. In-depth monitoring capabilities enable DevOps teams to monitor and troubleshoot applications, servers and network infrastructure including private and public clouds. End user experience monitoring is done from 50+ locations across the world and various wireless carriers. For more information on Site24x7, please visit http://www.site24x7.com/.
The document discusses various AWS services for monitoring, logging, and security. It provides examples of AWS CloudTrail logs and best practices for CloudTrail such as enabling in all regions, log file validation, encryption, and integration with CloudWatch Logs. It also summarizes VPC flow logs, CloudWatch metrics and logs, and tools for automating compliance like Config rules, CloudWatch events, and Inspector.
What is NetBackup appliance? Is it just NetBackup pre-installed on hardware?
The answer is both yes and no.
Yes, NetBackup appliance is simply backup in a box if you are looking for a solution for your data protection and disaster recovery readiness. That is the business problem you are solving with this turnkey appliance that installs in minutes and reduces your operational costs.
No, NetBackup appliance is more than backup in a box if you are comparing it with rolling your own hardware for NetBackup, or if you are comparing it with third-party deduplication appliances. Here is why I say this:
NetBackup appliance comes with redundant storage in RAID6 for storing your backups.
Symantec worked with Intel to design the hardware for running NetBackup optimally, for predictable and consistent performance. This eliminates the guesswork while designing the solution.
Many vendors will talk about various processes running on their devices to perform integrity checks; some solutions even need blackout windows to do those operations. NetBackup appliances include Storage Foundation at no additional cost. The storage is managed by Veritas Volume Manager (VxVM) and presented to the operating system through Veritas File System. Why is this important? Storage Foundation is the industry-leading storage management infrastructure that powers the most mission-critical applications in the enterprise space. It is built for high performance and resiliency. NetBackup appliance provides 24/7 protection with data integrity on storage provided by this industry-leading technology.
The Linux-based operating system, optimized for NetBackup and hardened by Symantec, eliminates the cost of deploying and maintaining a general-purpose operating system and associated IT applications.
NetBackup appliances include a built-in WAN optimization driver. Replicate to appliances at remote sites or to the cloud up to 10 times faster across high-latency links.
Your backups need to be protected. Symantec Critical System Protection provides non-signature-based host intrusion prevention. It protects against zero-day attacks using granular OS hardening policies along with application, user and device controls, all pre-defined for you in the NetBackup appliance so that you don't need to worry about configuring it.
Best of all, reduce your operational expenditure and eliminate complexity! One patch updates everything in this stack! The most holistic data protection solution with the least number of knobs to operate.
Cloud-Native Patterns and the Benefits of MySQL as a Platform Managed Service – VMware Tanzu
You can't have cloud-native applications without a modern approach to databases and backing services. Data professionals are looking for ways to transform how databases are provisioned and managed.
In this webinar, we'll cover practical strategies you can employ to deliver improved business agility at the data layer. We'll discuss the impact that microservices are having in the enterprise, and what this means for MySQL and other popular databases. Join us and learn the answers to these common questions:
How can you meet the operational challenge of scaling the number of MySQL database instances and managing the fleet?
Adding to this scale challenge, how can your MySQL instances maintain availability in a world where the underlying IT infrastructure is ephemeral?
How can you secure data in motion?
How can you enable self-service while maintaining control and governance?
We'll cover these topics and share how enterprises like yours are delivering greater outcomes with our Pivotal Platform managed MySQL.
Now you can scale without fear of failure.
Presenters:
Judy Wang, Product Management
Jagdish Mirani, Product Marketing
Towards the Cloud: Architecture Patterns and VDI Story – IT Expert Club
VDI architecture patterns aim to address three main problems: high traffic between system components, data inconsistency issues, and poor user experience. Event sourcing with cache-aside patterns and health endpoint monitoring can reduce duplicate requests. Retry, circuit breaker, and compensating transactions patterns add fault tolerance to address data inconsistency from errors. Improving storage performance and network optimizations further enhance the user experience of virtual desktop infrastructure deployments.
Veeam Webinar - Case study: building bi-directional DR – Joep Piscaer
This document outlines a case study for building bidirectional disaster recovery (DR) between two virtualized infrastructures located on separate sites. The project goals were to reduce recovery time objectives (RTO) from weeks to hours, reduce recovery point objectives (RPO) from infinite to a day, and implement a DR solution using Veeam software. The solution involved using Veeam's distributed backup architecture with proxies and repositories on each site to back up VMs locally and to the remote site. Reverse incremental backups were used to minimize storage usage. A live demo was presented to showcase the solution.
Resource replication in cloud computing is the process of making multiple copies of the same resource. It's done to improve the availability and performance of IT resources.
If you need to build a highly performant, mission-critical, microservice-based system following DevOps best practices, you should definitely check out Service Fabric!
Service Fabric is one of the most interesting services Azure offers today. It provides unique capabilities outperforming competitor products.
We are seeing global companies start to use Service Fabric for their mission critical solutions.
In this talk we explore the current state of Service Fabric and dive deeper to highlight best practices and design patterns.
We will cover the following topics:
Service Fabric Core Concepts
Cluster Planning and Management
Stateless Services
Stateful Services
Actor Model
Availability and reliability
Scalability and performance
Diagnostics and Monitoring
Containers
Testing
IoT
Live broadcast on https://www.youtube.com/watch?v=Zuxfhpab6xo
Optimize DR and Cloning with Logical Hostnames in Oracle E-Business Suite (OA... – Andrejs Prokopjevs
This presentation covers the idea of the logical hostname feature and its possible use cases with E-Business Suite, why it is a must-have configuration for DR, how it can improve your test/dev instance cloning and lifecycle processes (especially in a cloud deployment), an overview of support across 11i/R12.0/R12.1, and why it is a very hot topic right now for R12.2. Additionally, we will describe possible advanced configuration scenarios such as container-based virtualization. The content is based on real client environment implementation experience.
This document summarizes concepts related to disaster recovery including objectives, concepts, targets, risks, opportunities, and solutions. The objectives are to review disaster recovery basics, explore customer business risks, and discover opportunities through awareness services and technology. Concepts discussed include disaster recovery, business continuity, availability, recovery time objectives, and costs of downtime. Target audiences are those with mission critical applications. Business risks include various physical events, user errors, hardware/software failures, and security threats. Opportunities discussed include Ricoh consulting and managed services as well as partner solutions for networking, hosting, storage, and data protection. Specific disaster recovery deep dives focus on solutions from SonicWALL, Dell, and IBM that can be combined
Patterns and Pains of Migrating Legacy Applications to Kubernetes – QAware GmbH
Open Source Summit 2018, Vancouver (Canada): Talk by Josef Adersberger (@adersberger, CTO at QAware), Michael Frank (Software Architect at QAware) and Robert Bichler (IT Project Manager at Allianz Germany)
Abstract:
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud-native apps. But what to do if you've no shiny new cloud-native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We're facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
Patterns and Pains of Migrating Legacy Applications to Kubernetes – Josef Adersberger
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and a higher elasticity & resiliency in production. Kubernetes is the place to be for cloud native apps. But what to do if you've no shiny new cloud native apps but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We're facing the challenge of migrating hundreds of JEE legacy applications of a German blue chip company onto a Kubernetes cluster within one year.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way.
A scalable server environment for your applications – GigaSpaces
This document discusses building applications for the cloud and provides best practices. It notes that deploying applications on the cloud introduces challenges related to scalability, reliability, security, and management. It recommends that applications be designed to be elastic, memory-based, and easy to operate in order to fully take advantage of the cloud. Specific steps are outlined, such as using in-memory data grids for messaging and as the system of record, and auto-scaling the web tier.
The document outlines an agenda for a Dell presentation on data protection and performance management solutions. The agenda includes introductions of vRanger, NetVault, AppAssure, and vFoglight products. A Q&A session and networking reception will follow the presentations. Backup solutions discussed include image-level VM backup, deduplication, replication to the cloud, and recovery options. Key benefits highlighted are reduced complexity, improved performance and scalability, and lower costs.
Varrow Q4 Lunch & Learn Presentation - Virtualizing Business Critical Applica... – Andrew Miller
This document provides a summary of a presentation on virtualizing tier one applications. The presentation covered the top 10 myths about virtualizing business critical applications and provided best practices for virtualizing mission critical applications. It also discussed real world tools for monitoring virtualized environments like Confio IgniteVM and vCenter Operations. The presentation aimed to show that virtualizing tier one applications is possible and discussed strategies for virtualizing SQL Server and Microsoft Exchange environments.
The document discusses disaster recovery strategies and how virtualization can help bridge the gaps in traditional approaches. It outlines the need for disaster recovery due to common disruptions. Traditional methods either focus on fast recovery times through duplication which is costly, or backups which are cheaper but have slow recovery. Virtualization allows consolidating workloads on virtual hosts for reduced costs while providing faster recovery times than backups. The document also highlights case studies of customers benefiting from PlateSpin products in achieving disaster recovery goals.
Backup Exec Blueprints: How to Use
Getting the most out of Backup Exec blueprints
These Blueprints are designed to show customer challenges and how Backup Exec solves these challenges.
Each Blueprint consists of:
Pain Points: What challenges customers face
Whiteboard: Shows how Backup Exec solves the customer challenges
Recommended Configuration: Shows recommended installation
Dos: Gives detailed configurations suggested by Symantec
Don'ts: What configurations & pitfalls customers should avoid
Advantages: Summarizes the Backup Exec advantages
Use these Blueprints to:
Understand the customer challenges and how Backup Exec solves them
Present the Backup Exec best practice solution
Typical disaster recovery plans leverage backup and/or replication to move data out of the primary data center and to a secondary site. Historically, the secondary site is another data center that the organization maintains. But now, companies are looking to the cloud to become a secondary site, leveraging it as a backup target and even a place to start their applications in the event of a failure. The problem with this approach is that it merely simulates a legacy design and presents some significant recovery challenges.
This document discusses resource management in cloud computing and strategies for improving energy efficiency. It describes different resource types, including physical and logical resources. It then discusses how resource management controls access to cloud capabilities. The document outlines how data center power consumption is growing rapidly and motivating the need for green computing approaches. These include power-aware and thermal-aware scheduling of virtual machines, optimized data center design, and minimizing the size of virtual machine images to reduce energy usage. The overall summary advocates an integrated green cloud framework combining various efficiency techniques.
Marketing Automation at Scale: How Marketo Solved Key Data Management Challen... – Continuent
Marketo uses Continuent Tungsten to solve key data management challenges at scale. Tungsten provides high availability, online maintenance, and parallel replication to allow Marketo to process over 600 million MySQL transactions per day across more than 7TB of data without downtime. Tungsten's innovative caching and sharding techniques help replicas keep up with Marketo's high transaction volumes and uneven tenant sizes. The solution has enabled fast failover, rolling maintenance, and scaling to thousands of customers.
2. Today's Itinerary
A Blueprint Primer
A Blueprint-Based Disaster Recovery Solution
Our Demo Configuration
Failure, Data Loss: Fail-Over
Repair, Planned Fail-Back
Analysis of the Solution and the Blueprint
4. It's The Data, Stupid!!
60% of companies that lose their data go out of business within 6 months!
93% of companies that lose the data center for 10 days or more file for bankruptcy within one year! (NA&RA)
Conclusion: The two critical components of DR are:
Protecting the data, and
Being able to resume quickly after disasters and losses!
5. Cisco and Blueprints
2002: Cisco campus on Tasman Drive:
Every building was identical!
Identical blueprint for all infrastructure:
Water, sewer, power, networking, fire safety, ...
Unifies all maintenance, management, upgrades, ...
The entire campus was built much faster and more efficiently by cookie cutter!
Populating the campus was done in a hurry.
Everything looked the same.
Buildings were reusable as needs changed.
6. Our Blueprint Objective
Achieve the same level of efficiency with IT solutions as Cisco did with their campus blueprint!
Reuse of architecture and modules to assure rapid and efficient scalability and a unified management framework.
Enable rapid creation of new solutions within the same architectural framework.
Enable easier capacity scaling without affecting IT management.
8. DR/DP Solution Blueprint TOC:
The Platform:
The Physical: Sites, hardware, network
Virtualization: Server, storage, network
The Applications to protect
The Solution Engine: Backup software
Use cases that any DR solution must support
Normalized workflows implementing the use cases.
Questionnaire to assist adaptation and customization
9. Blueprint Template: The Architectural Principles
Extensive use of virtualization in servers, storage and networking.
Integrating layers of the most cost-effective products.
Creating a building-block approach to solution design and implementation.
Creating designs that can easily be adapted, extended and customized to meet specific requirements.
Unified architecture to simplify management of assets and decision points.
The results are sophisticated and complex, but integrated and efficient, business solutions.
10. Blueprint Summary
Virtualization + sound architectural principles enable a more effective approach to building and implementing solutions.
Customers get more sophisticated and more reliable solutions per $$.
Resellers and integrators can reuse the building blocks and handle more customers in a shorter amount of time.
Reuse of building blocks implies simplified and uniform management.
12. Our DR Solution
We have a business-critical application.
We want to protect it with a DR solution so that in the case of loss of a disk, server, site, etc., we can rapidly fail over to the DR instance.
Our platform consists of VMware, NetBackup and Windows on top of NEC's servers, storage and network.
13. Objectives and Requirements
RPO: How much data/time are we willing to lose?
I.e. scheduled updates every RPO/2.
RTO: How soon must we be operational?
This is the total recovery time. It includes:
The time to make the fail-over decision;
The time required to fail over and start the DR instance once the decision has been made.
DR or DP?
DR: The DR site can support all critical applications.
DP: Only the data is protected; repair and recovery are to the production site.
Short RTO => DR standby capability!
14. RPO and RTO
RPO: Predominantly a WAN bandwidth issue.
WAN bandwidth is expensive!!
More frequent snapshots -> higher data change volume/hr.
Note that snapshots are inexpensive; they do not affect application performance.
RTO: Total Fail-Over Time:
When the decision is made, how fast can applications be operational at the DR site?
Cheat!! By updating your DR instances after every completed update!
How reliable is it?
Test often!! Or even better, perform planned fail-over/fail-back regularly!! (A bandwidth-sizing sketch follows below.)
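Since the slide frames RPO primarily as a WAN bandwidth question, here is a back-of-the-envelope sizing sketch. This is my own illustration, not part of the original deck; the change rate and headroom factor are assumptions to be replaced with values measured in your environment.

```python
# WAN sizing sketch for an RPO target (illustrative assumptions only).

def required_wan_mbps(change_rate_gb_per_hour: float,
                      rpo_minutes: float,
                      headroom: float = 1.5) -> float:
    """Bandwidth (Mbit/s) needed so the data changed between two snapshot
    updates (taken every RPO/2, as in the slides) replicates before the
    next update starts."""
    interval_s = (rpo_minutes / 2.0) * 60.0
    changed_bits = change_rate_gb_per_hour * (interval_s / 3600.0) * 8e9
    return headroom * changed_bits / interval_s / 1e6

if __name__ == "__main__":
    # Example: 20 GB/hour of change against the demo's 5-minute RPO.
    print(f"{required_wan_mbps(20, 5):.0f} Mbit/s")  # ~67 Mbit/s with 1.5x headroom
```

Because the data changed per interval scales with the interval length, the sustained requirement is essentially the average change rate times the headroom factor; shrinking the RPO mainly increases burstiness and protocol overhead rather than the steady-state rate.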
15. [Architecture diagram] An NEC 5800 FT server (NEC FT-Gemini) running the VMware ESX hypervisor hosts the application guest VMs (VM1, VM2, VM3, ... VMn), each guest running its application. The server connects through NICs to the TCP/IP network and through HBAs to an FC/iSCSI SAN, where an NEC D3/D4 storage array with FC or iSCSI connectivity presents the LUNs.
17. Required Use Cases
Unplanned and planned fail-over to the DR instance.
Planned: for maintenance or for testing.
Planned and unplanned (!!) fail-back.
Unplanned when the DR fail-over is aborted.
Recovery of lost files or folders back to the production instance.
This may include the entire disk/file system.
The backup software (NetBackup) provides this capability all by itself.
18. In Your World
What is the typical range for RPO and RTO in the DR solutions you build?
How do they vary across types or sizes of companies?
What other requirements do you see that challenge the budget, your efficiency as designers or implementers, etc.?
What is the % split between full DR and DP-only solutions? Is it changing?
20. Our Demo Solution
A VM with a file system holding user data.
For demo purposes our objectives are tight.
RPO: 5 minutes:
I.e. scheduled updates every 2.5 minutes (see the scheduling sketch below).
RTO: 5 minutes:
Total recovery time. Includes the time to make the fail-over decision and the time required to start the DR instance once the decision to fail over has been made.
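As a concrete illustration of the RPO/2 cadence, the loop below triggers a protection update every 150 seconds. This is my own sketch, not tooling from the blueprint; run_protection_update() is a hypothetical hook where the real NetBackup snapshot/backup job that replicates the LUN to the DR site would be launched.

```python
# Minimal RPO/2 cadence sketch (illustrative only).
import time

RPO_SECONDS = 5 * 60                  # demo objective: 5-minute RPO
INTERVAL_SECONDS = RPO_SECONDS // 2   # update every RPO/2 = 150 s

def run_protection_update() -> None:
    # Placeholder for the actual backup/replication job.
    print("protection update triggered")

if __name__ == "__main__":
    while True:
        started = time.monotonic()
        run_protection_update()
        # Sleep out the rest of the interval so updates stay on the RPO/2 cadence.
        time.sleep(max(0.0, INTERVAL_SECONDS - (time.monotonic() - started)))
```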
25. Fail-Over: Should We/Shouldn't We?
If you fail over you WILL lose some data.
If you can resume in less than the RPO time, net win!
If you can wait it out for some time, a bit beyond the RTO, low risk and convenience.
Otherwise we'll push the button and start the fail-over workflow (a decision sketch follows below).
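One possible reading of these rules of thumb, encoded as a toy helper. This is my own illustration: the thresholds and the patience factor are assumptions, not values prescribed by the blueprint.

```python
# Toy fail-over decision helper (illustrative interpretation of the slide).
from dataclasses import dataclass

@dataclass
class DrObjectives:
    rpo_minutes: float
    rto_minutes: float

def should_fail_over(expected_repair_minutes: float, obj: DrObjectives,
                     patience_factor: float = 1.5) -> bool:
    """Return True if starting the fail-over workflow looks like the better option."""
    if expected_repair_minutes <= obj.rpo_minutes:
        return False   # likely faster (and lossless) to wait out the outage
    if expected_repair_minutes <= obj.rto_minutes * patience_factor:
        return False   # slightly beyond the RTO: low risk, more convenient to wait
    return True        # otherwise push the button

if __name__ == "__main__":
    demo = DrObjectives(rpo_minutes=5, rto_minutes=5)
    print(should_fail_over(expected_repair_minutes=3, obj=demo))    # False: wait
    print(should_fail_over(expected_repair_minutes=60, obj=demo))   # True: fail over
```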
27. Work Flow for UC-DR-2:
1. Shut down any remaining production-side VMs that are part of the application. This prevents data corruption and network configuration errors.
2. In parallel with step 3, begin performing any network reconfiguration required for all clients to reach the DR solution instance. Save for last the step that enables client access.
3. Start the DR instances of the component applications in the prescribed order to bring the business application on-line. Verify data/operational integrity.
4. Perform the last network configuration steps to allow clients to connect to the DR instance of the application.
(An orchestration sketch of these steps follows below.)
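The skeleton below orders the UC-DR-2 steps, including the allowed parallelism between steps 2 and 3. It is my own sketch, not tooling shipped with the blueprint: every helper is a hypothetical hook standing in for the real VMware (vSphere/PowerCLI), network, and application commands.

```python
# Skeletal fail-over orchestration for UC-DR-2 (hypothetical hooks only).
from concurrent.futures import ThreadPoolExecutor

def shutdown_production_vms(vms):
    # Step 1: stop surviving production-side VMs to avoid conflicting writes.
    for vm in vms:
        print(f"shutting down production VM {vm}")

def reconfigure_network_for_dr():
    # Step 2 (all but the final cut-over): point clients toward the DR site.
    print("staging network reconfiguration toward the DR site")

def start_dr_instances(ordered_apps):
    # Step 3: bring up DR instances in the prescribed order, then verify integrity.
    for app in ordered_apps:
        print(f"starting DR instance of {app}")
    print("verifying data/operational integrity")

def enable_client_access():
    # Step 4: the last network step that lets clients reach the DR instance.
    print("enabling client access to the DR instance")

def fail_over(production_vms, ordered_apps):
    shutdown_production_vms(production_vms)
    # Steps 2 and 3 may run in parallel, as the workflow allows.
    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(reconfigure_network_for_dr)
        pool.submit(start_dr_instances, ordered_apps)
    enable_client_access()

if __name__ == "__main__":
    fail_over(production_vms=["app-vm-01"], ordered_apps=["file-service"])
```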
28. Work Flow Verification
Verify with a PC that we have completed the fail-over:
Connect to the DR instance with the PC.
Verify that we did lose some data: the changes that occurred after the last complete update.
30. The Fail-Back Decision
Have we repaired the production instance?
Have we tested the new production instance?
What is the optimal time for a planned fail-back?
Does the organization have a time period that is more convenient for application shutdown?
32. Planned Fail-Back: UC-DR-5
1. Shut down the DR instance of the application, running at the DR site.
2. Start reconfiguring the network and all other infrastructure components for steering clients to the production-site instance of the application. Note that steps 3 and 4, described below, can be carried out in parallel with this step.
3. Perform the last NetBackup backup cycle for the LUNs in the DR site to the backup archive.
4. Perform a restore from the backup archive to the LUNs in the production site.
5. When steps 2 - 4 have all completed, start up the restored production instance of the application. Clients will now have full service.
6. Restart NetBackup protection of the production instance to the backup archive at the DR site.
(A companion orchestration sketch follows below.)
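A companion sketch for UC-DR-5, again with hypothetical hooks rather than real commands. In the actual solution, steps 3 and 4 would be NetBackup backup and restore jobs for the DR-site LUNs; here they are placeholders so the sequencing and parallelism stay explicit.

```python
# Skeletal fail-back orchestration for UC-DR-5 (hypothetical hooks only).
from concurrent.futures import ThreadPoolExecutor

def shutdown_dr_instance():
    print("1. shutting down the DR instance of the application")

def reconfigure_network_for_production():
    print("2. steering clients back to the production-site instance")

def final_backup_then_restore():
    print("3. final NetBackup backup cycle of the DR-site LUNs to the archive")
    print("4. restore from the backup archive to the production-site LUNs")

def fail_back():
    shutdown_dr_instance()
    # Steps 2 and 3-4 may run in parallel with each other.
    with ThreadPoolExecutor(max_workers=2) as pool:
        pool.submit(reconfigure_network_for_production)
        pool.submit(final_backup_then_restore)
    print("5. starting the restored production instance; clients have full service")
    print("6. restarting NetBackup protection of production to the DR-site archive")

if __name__ == "__main__":
    fail_back()
```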
33. Verify Restored Service
Verify that the new production instance is accessible:
Connect to the repaired VM/FS/LUN and verify that service from the production instance has been restored.
Verify that all data set changes made at the DR instance have been transferred to the production instance.
34. Discussion Topics
How can we shorten the application downtime?
Iterative recovery: (level 0 +) incremental backups to the repaired instance shorten the last update.
Large data set, limited WAN bandwidth: out-of-band restore.
Testing the new instance:
Start it up and access data using the next-to-last incremental above.
36. Blueprints: Why A Better Result
Using tested architectural principles and building blocks that are common to many solutions.
The solution is based on a well-proven architecture.
The building blocks have been road-tested with other solutions and customers.
The same blueprint enables easier integration across new applications:
Easy to protect new applications by simply expanding the existing DR solution.
The resulting solution is easily integrated into the management framework.
37. Blueprints: Efficiency!
Starting out with an 80%-complete design and a well-understood set of building blocks.
The same blueprint enables easier integration across new applications:
Easy to protect new applications by simply expanding the existing DR solution.
Adaptation of the blueprint to the customer's specific environments and requirements is guided by the questions in the questionnaire.
These variations do not break the blueprint; the principles are still intact.
Scalability is built in at the application, solution and infrastructure level.
38. Meeting The Requirements?
Does the solution meet our requirements?
How many (%) of the DR solutions you have built or managed have tighter requirements? Or looser requirements?
Should any of that be reflected in the blueprint?
39. Improving the Solution
What questions do we ask, what data do we collect, what experience do we need to take this solution to the next iteration?
40. Improvements, Savings
How can we improve the solution for the customer?
Can critical applications (customer db, ...) run on a different schedule than the common set (email, home dir)?
How much should we pay for the next increment of WAN bandwidth?
How can we save $$ for the customer?
Reducing the DR footprint?