- The document discusses the process of forking and creating new processes in an operating system. It describes the key steps like allocating memory for the child process, copying resources from the parent, and starting the new process.
- Code examples are provided to demonstrate how fork is implemented at the system call level and how it is used in C programs to create new processes.
- The document also explains the data structures and functions involved in process switching and context switching between threads.
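As a concrete illustration of the usage the summary describes, here is a minimal, standard POSIX C example of fork() creating a child process (this sketch is illustrative, not code taken from the document itself):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();              /* duplicate the calling process */

        if (pid < 0) {                   /* fork failed: no child was created */
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {           /* fork returns 0 in the child */
            printf("child:  pid=%d\n", (int)getpid());
            _exit(EXIT_SUCCESS);
        } else {                         /* parent receives the child's pid */
            printf("parent: pid=%d, child=%d\n", (int)getpid(), (int)pid);
            waitpid(pid, NULL, 0);       /* reap the child to avoid a zombie */
        }
        return 0;
    }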
TR-069, also known as the CPE WAN Management Protocol (CWMP), defines a protocol for remote management of customer-premises equipment connected to an IP network. It allows broadband service providers to remotely configure, install, diagnose, and maintain home and business networking devices. Key aspects of TR-069 include periodic connectivity checks, remote device management via RPC calls, and the ability to initiate sessions through connection requests from the Auto-Configuration Server to the customer-premises equipment.
Automating for Monitoring and Troubleshooting your Cisco IOS Network (Cisco Canada)
Do you wish you could monitor your network more automatically? Have you ever wasted hours trying to capture evidence of a transient network issue? Do you know which part of your network is likely to fail next, and how to prevent it? Your Cisco IOS network provides a wealth of advanced Device Manageability Instrumentation (DMI) and Embedded Automation Systems (EASy) for designing and implementing your own network automations. Learn how network automation allows you to automate manual tasks, better operate existing network services, and even enable new and innovative networking solutions. This session uncovers embedded network automation capabilities you can use to interact with your network elements to implement network testing, verification, and service assurance in a more effective, efficient, and robust way. Network automation fundamentals, as well as the choice and use of appropriate practices, are illustrated through a combination of presentation and best-practice examples. The topic is relevant for network planners, administrators, engineers, and system integrators at both enterprises and service providers.
XPDDS17: Reworking the ARM GIC Emulation & Xen Challenges in the ARM ITS Emulation (The Linux Foundation)
Part 1: Reworking the ARM GIC Emulation
The ARM Generic Interrupt Controller (GIC) provides some level of virtualization support in hardware. This still requires emulation of the distributor part, which has to integrate with the hardware virtualization feature. Doing this in a performant and readable way is not trivial; in particular, the locking strategy tends to be complicated.
While extending the existing virtual GIC support in Xen to cover MSIs, some issues were discovered that call for significant changes to the existing code.
The presentation will briefly describe the existing VGIC design and the issues we faced when trying to extend it. Based on this, the proposed changes will be presented, along with how they improve and ideally simplify the code.
Part 2: Xen Challenges in the ARM ITS Emulation
To be able to use MSIs in Xen domains on ARM systems, we need to emulate the ARM GICv3 ITS controller. Its design is centered around a command queue located in normal system memory.
Emulating this in the Xen hypervisor brings some interesting challenges, ranging from safely accessing guest memory and dealing with the possible propagation of commands, to possible DoS attacks by domains keeping the emulation code busy.
The presentation outlines the main problems and how we hit Xen's limits in emulating this correctly and efficiently. It also presents our temporary workarounds and their drawbacks.
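To make the DoS concern concrete, below is a hypothetical C sketch of draining a guest-owned command queue with a fixed per-pass budget, so a guest that keeps appending commands cannot monopolize the emulation path. All names here (its_queue, read_guest_cmd, CMD_BUDGET) are illustrative assumptions, not actual Xen code:

    /* Hypothetical sketch: drain a guest-owned ITS command queue with a fixed
     * per-pass budget so a guest cannot keep the emulation busy indefinitely. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define CMD_SIZE   32u              /* GICv3 ITS commands are 32 bytes */
    #define CMD_BUDGET 64u              /* max commands emulated per pass */

    struct its_queue {
        uint8_t  *base;                 /* mapped view of the guest's queue */
        uint32_t  readp, writep;        /* byte offsets into the queue */
        uint32_t  size;                 /* queue size in bytes */
    };

    /* Stand-in for a safe copy from guest memory, which may fail at any time. */
    static bool read_guest_cmd(struct its_queue *q, uint8_t cmd[CMD_SIZE])
    {
        memcpy(cmd, q->base + q->readp, CMD_SIZE);
        return true;
    }

    static void handle_cmd(const uint8_t cmd[CMD_SIZE]) { (void)cmd; }

    /* Returns true if commands remain, so the caller can reschedule
     * instead of spinning for as long as the guest keeps writing. */
    bool its_process_cmds(struct its_queue *q)
    {
        for (uint32_t budget = CMD_BUDGET;
             budget > 0 && q->readp != q->writep; budget--) {
            uint8_t cmd[CMD_SIZE];

            if (!read_guest_cmd(q, cmd))
                break;                  /* guest memory vanished: stop cleanly */
            handle_cmd(cmd);
            q->readp = (q->readp + CMD_SIZE) % q->size;
        }
        return q->readp != q->writep;
    }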
Mr. Mohan Babu (HPC @ AMD) presented on Spack basics and HPC containers. He covered Spack fundamentals, its concepts, package creation, and the use of containers in HPC.
KVM performance optimization for Ubuntu (Sim Janghoon)
This document discusses various techniques for optimizing KVM performance on Linux systems. It covers CPU and memory optimization through techniques like vCPU pinning, NUMA affinity, transparent huge pages, KSM, and virtio_balloon. For networking, it discusses vhost-net, interrupt handling using MSI/MSI-X, and NAPI. It also covers block device optimization through I/O scheduling, cache mode, and asynchronous I/O. The goal is to provide guidance on configuring these techniques for workloads running in KVM virtual machines.
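As an illustration of the primitive underlying vCPU pinning, the C sketch below restricts the calling process to a single CPU with sched_setaffinity(2). This is only a sketch of the idea; in practice KVM guests are usually pinned through libvirt's vcpupin configuration or taskset rather than custom code:

    /* Minimal sketch of the affinity primitive behind vCPU pinning. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(2, &set);                          /* allow only CPU 2 */

        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("now running on CPU %d\n", sched_getcpu());
        return 0;
    }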
Pushing Packets - How do the ML2 Mechanism Drivers Stack Up (James Denton)
Architecting a private cloud to meet the use cases of its users can be a daunting task. How do you determine which of the many L2/L3 Neutron plugins and drivers to implement? Does network performance outweigh reliability? Are overlay networks just as performant as VLAN networks? The answers to these questions will drive the appropriate technology choice.
In this presentation, we will look at many of the common drivers built around the ML2 framework, including LinuxBridge, OVS, OVS+DPDK, SR-IOV, and more, and will provide performance data to help drive decisions around selecting a technology that's right for the situation. We will discuss our experience with some of these technologies, and the pros and cons of one technology over another in a production environment.
In this deck from the 2019 Stanford HPC Conference, Todd Gamblin from Lawrence Livermore National Laboratory presents: Spack - A Package Manager for HPC.
"Spack is a package manager for cluster users, developers and administrators. Rapidly gaining popularity in the HPC community, like other HPC package managers, Spack was designed to build packages from source. Spack supports relocatable binaries for specific OS releases, target architectures, MPI implementations, and other very fine-grained build options.
This talk will introduce some of the open infrastructure for distributing packages, challenges to providing binaries for a large package ecosystem and what we're doing to address problems. We'll also talk about challenges for implementing relocatable binaries with a multi-compiler system like Spack. Finally, we'll talk about how Spack integrates with the US Exascale project's open source software release plan and how this will help glue together the HPC OSS ecosystem.
Todd is a computer scientist in the Center for Applied Scientific Computing at Lawrence Livermore National Laboratory. His research focuses on scalable tools for measuring, analyzing, and visualizing performance data from massively parallel applications. Todd is also involved with many production projects at LLNL. He works with Livermore Computing's Development Environment Group to build tools that allow users to deploy, run, debug, and optimize their software for machines with million-way concurrency.
Todd received his Ph.D. in computer science from the University of North Carolina at Chapel Hill in 2009. His dissertation investigated parallel methods for compressing and sampling performance measurements from hundreds of thousands of concurrent processors. He received his B.A. in Computer Science and Japanese from Williams College in 2002. He has also worked as a software developer in Tokyo and held research internships at the University of Tokyo and IBM Research.
Watch the video: https://youtu.be/DhUVbroMLJY
Learn more: https://computation.llnl.gov/projects/spack-hpc-package-manager
and
http://hpcadvisorycouncil.com/events/2019/stanford-workshop/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Achieving the ultimate performance with KVM (ShapeBlue)
This document summarizes a presentation about achieving ultimate performance with KVM. It discusses optimizing hardware, CPU, memory, networking, and storage for virtual machines. The goal is the lowest cost per delivered resource while meeting performance targets. Specific optimizations mentioned include CPU pinning, huge pages, SR-IOV networking, virtio drivers, and bypassing the host for storage. It cautions that many performance claims use unrealistic benchmarks and hardware configurations unlike real-world usage.
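To illustrate one of the listed optimizations, here is a minimal C sketch that backs a buffer with an explicit 2 MiB huge page via mmap(MAP_HUGETLB). Hypervisors typically obtain huge pages through hugetlbfs or transparent huge pages instead, and this sketch assumes huge pages have already been reserved on the host:

    /* Minimal sketch: back a buffer with one explicit 2 MiB huge page.
     * Assumes huge pages were reserved beforehand, e.g. via
     * /proc/sys/vm/nr_hugepages; otherwise the mmap fails with ENOMEM. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define LEN (2UL * 1024 * 1024)        /* one 2 MiB huge page on x86-64 */

    int main(void)
    {
        void *buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap(MAP_HUGETLB)");
            return 1;
        }
        memset(buf, 0, LEN);               /* touch the page to fault it in */
        munmap(buf, LEN);
        return 0;
    }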
At Percona Live in April 2016, Red Hat's Kyle Bader reviewed the general architecture of Ceph and then discussed the results of a series of benchmarks done on small to mid-size Ceph clusters, which led to the development of prescriptive guidance around tuning Ceph storage nodes (OSDs).
Sony R&D Center has been though robotics history and products for years. As robotics platform and Robotics Operating System (ROS) getting matured, there is a requirement to handle the distributed system integration. Using Kubernetes on edge cluster system, there are a lot of advantages such as application lifecycle, deployment and recovery. Also using CNI and ROS Data Distributed System, it can construct distributed system on edge cluster, so that multiple robots can connect directedly and work collaboratively for the specific task. We will share how we can use Kubernetes on edge including deployment robotics application and possible problems based on our experience. Furthermore, we will share our approach to support edge dependent platform with device-plugin to attach hardware resources and even virtual devices which access to the host system such as 3rd party application.
eBPF is an exciting new technology that is poised to transform Linux performance engineering. eBPF enables users to dynamically and programmatically trace any kernel or user-space code path, safely and efficiently. However, understanding eBPF is not so simple. The goal of this talk is to give audiences a fundamental understanding of eBPF, how it interconnects existing Linux tracing technologies, and how it provides a powerful platform to solve any Linux performance problem.
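For readers new to the technology, eBPF programs are commonly written in restricted C and compiled with clang's BPF target. The minimal kprobe sketch below prints a line whenever the kernel's openat2 handler runs; the attach point and build invocation are assumptions about a typical libbpf setup, since kernel symbol names vary by version:

    /* minimal_kprobe.bpf.c - a typical libbpf-style eBPF program; compile with:
     * clang -O2 -g -target bpf -c minimal_kprobe.bpf.c */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("kprobe/do_sys_openat2")           /* hook the kernel's openat2 handler */
    int trace_open(void *ctx)
    {
        bpf_printk("openat2 invoked");     /* readable via tracefs trace_pipe */
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL"; /* required to use GPL-only helpers */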
This document summarizes eBay's operationalization of OVN at scale as their preferred SDN solution. Some key points:
1. eBay migrated from a legacy vendor SDN to OVN for improved scalability, open source benefits, and reduced vendor lock-in. OVN is used for OpenStack VMs, Kubernetes, and load balancing.
2. Typical OVN deployments at eBay include 25+ routers, 10k+ ports, 35k+ MAC bindings, and 1k+ hypervisors per availability zone. Control planes use a three-node Raft cluster for high availability.
3. Migration from the legacy SDN to OVN was done gradually by workload type to minimize impact. Some surprises included
Advanced Model Inferencing leveraging Kubeflow Serving, KNative and Istio (Animesh Singh)
Model inferencing use cases are becoming a requirement as models move into the next phase of production deployments. More and more users are encountering use cases around canary deployments, scale-to-zero, and serverless characteristics. Advanced use cases are also emerging around model explainability, including A/B tests, ensemble models, multi-armed bandits, and more.
In this talk, the speakers detail how to handle these use cases using Kubeflow Serving and the cloud-native Kubernetes stack of Istio and Knative. Knative and Istio enable autoscaling, scale-to-zero, and canary deployments, as well as scenarios where traffic is steered to the best-performing models. This can be combined with Knative eventing, the Istio observability stack, and the KFServing Transformer to handle pre/post-processing and payload logging, which in turn enables drift and outlier detection. We will demonstrate where KFServing currently stands and where it is heading.
This document provides information about Fortinet and FortiGate network security appliances. It introduces Fortinet as a company specializing in network and information security, founded in 2000. It then describes some of Fortinet's key products and certifications, including FortiGate, FortiAnalyzer, and FortiManager. The document goes on to explain the FortiGate UTM concept and firewall role. It lists some FortiGate series and highlights key FortiOS features. The remainder provides instructions for deploying a FortiGate virtual machine in VMware Workstation, including network configuration steps.
The document discusses recent trends in information technology. It begins by introducing the author, Anwar Fathalla Ahmed, and his background working in information security.
It then outlines some of the key concepts in information security, including defining security, the components that make up an information system, and how to balance security and access. Critical characteristics of information are discussed, such as availability, accuracy, and confidentiality. Models for conceptualizing the security of an information system are presented.
This presentation discusses cyber security and cyber crimes. It defines cyber security as the technologies and processes used to protect computers, networks, and data from unauthorized access and attacks. It explains that security is needed to protect an organization's ability to function and to safeguard the data it collects. Cyber crimes are described as any crimes involving computers and networks, including computer viruses, denial-of-service attacks, malware, fraud, and identity theft. The presentation provides an overview of how cyber threats have evolved over time and the top countries where malicious code originates. It concludes with recommendations for cyber security measures that can be implemented on a campus network, such as virus filtering, firewalls, and free anti-virus, encryption, and change-management software.
Presentation on cyber crime and security (Alisha Korpal)
This document discusses various types of cybercrimes and cybersecurity issues. It defines cybercrimes as crimes committed using computers and the internet, such as identity theft. It then provides statistics on common types of cyber attacks like financial fraud, sabotage of networks, and viruses. The document also discusses specific cybercrimes like hacking, child pornography, denial of service attacks, and software piracy. It concludes by offering tips for improving cybersecurity, such as using antivirus software and firewalls, and maintaining safe internet practices.
Information security involves protecting information systems, hardware, and data from unauthorized access, use, disclosure, disruption, modification, inspection, recording or destruction. The primary goals of information security, known as the CIA triad, are confidentiality, integrity and availability. Information is classified into different types like public, private, confidential and secret depending on who can access it and the potential damage of unauthorized access. Security also involves protecting physical items, individuals, operations, communications, networks and information assets.
Cyber security involves protecting computers, networks, programs and data from unauthorized access and cyber attacks. It includes communication security, network security and information security to safeguard organizational assets. Cyber crimes are illegal activities that use digital technologies and networks, and include hacking, data and system interference, fraud, and illegal device usage. Some early forms of cyber crime date back to the 1970s. Maintaining antivirus software, firewalls, backups and strong passwords can help protect against cyber threats while being mindful of privacy and security settings online. The document provides an overview of cyber security, cyber crimes, their history and basic safety recommendations.
This document provides an overview of information security. It defines information and discusses its lifecycle and types. It then defines information security and its key components - people, processes, and technology. It discusses threats to information security and introduces ISO 27001, the international standard for information security management. The document outlines ISO 27001's history, features, PDCA process, domains, and some key control clauses around information security policy, organization of information security, asset management, and human resources security.