This document provides an example of how to configure workload management (WLM) classes on an AIX system based on business priorities for a banking workload. It describes setting up WLM classes mapped to different business processes and database instances, with rules for static and dynamic classification of processes into the classes. Processes are classified into classes like "biz_critical", "biz_important", and "biz_regular" based on their importance to the business, and resources are prioritized accordingly.
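A minimal sketch of what such an AIX WLM configuration might look like. The class names come from the summary above; the configuration name "banking", user names, and application paths are illustrative placeholders, not values from the original document. AIX keeps per-configuration `classes` and `rules` files under `/etc/wlm/<config>/`:

```shell
# /etc/wlm/banking/classes -- class definitions (AIX stanza format).
# Lower tiers are served first: tier 0 before tier 1 before tier 2.
cat > /etc/wlm/banking/classes <<'EOF'
biz_critical:
        description = "Core banking database instances"
        tier = 0
biz_important:
        description = "Reporting and batch workloads"
        tier = 1
biz_regular:
        description = "Everything else"
        tier = 2
EOF

# /etc/wlm/banking/rules -- classification rules, matched top to bottom.
# Columns: class  reserved  user  group  application  type  tag
cat > /etc/wlm/banking/rules <<'EOF'
biz_critical   -  db2core  -  -               -  -
biz_important  -  -        -  /usr/bin/brio*  -  -
biz_regular    -  -        -  -               -  -
EOF

# Start WLM with this configuration.
wlmcntrl -d banking
```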
The document discusses the experiences of The Hartford Financial Services Group in implementing workload management goals using IBM's Workload Manager (WLM) to manage transaction response times for their CICS and IMS environments. They analyzed transaction profiles, defined service classes and goals, monitored performance, and made adjustments to improve response times and ensure goals were met during high-volume periods such as month-end processing. For stored procedures, they classified enclaves based on origination point and made changes to improve IMS transaction response times that were being impacted.
This document discusses adding artificial intelligence capabilities to workload managers like IBM's AIX Work Load Manager (WLM) to help address system performance problems. It proposes using monitoring data and fuzzy logic rules to detect issues, identify problematic processes, and dynamically reschedule processes to prioritize important services. Existing system instrumentation and soft computing tools could be integrated with Perl to implement this. However, these ideas are theoretical and soft computing approaches are not widely known or accepted. The goal is to give workload managers more "brains" to autonomously address performance problems based on gathered data and expert knowledge encoded as fuzzy rules.
The document discusses using DB2 Workload Manager (WLM) to improve database performance issues on a hybrid multi-terabyte database. WLM maintains service level agreements, prevents resource hogging, optimizes the system, and controls, monitors, and analyzes database activity. The implementation of WLM included creating service classes, thresholds, and data tags to prioritize sponsor queries and reports for priority data, resolving failures and slow performance. This led to streamlined resource usage and happy customers.
These slides were presented during a technical event at my organization. They give an overview of how to find the root cause of unexpected system-down events, and are mainly useful for Linux and Unix system administrators. I tried to cover all aspects of the topic; presenting the slides took me more than two hours, but they can also be covered in a shorter time span. The gray slide background hides the company logo, to preserve the confidentiality of a private template. The knowledge itself, however, is not restricted :)
Vijay Adik presented configuration best practices for optimizing Oracle performance on AIX systems. He discussed tuning the memory, I/O, network, and miscellaneous settings. Specifically, he recommended modifying the virtual memory manager settings such as maxperm%, minperm%, strict_maxclient, and lru_file_repage to prevent paging of computational memory and allow the file cache to grow. Adik also emphasized that ongoing monitoring is needed to identify any new bottlenecks after changes are made.
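As a sketch, the VMM settings mentioned above would be applied with `vmo` on AIX; the values here are illustrative only, not a recommendation for any particular system:

```shell
# Let the file cache shrink before computational memory is paged out.
vmo -p -o minperm%=5 -o maxperm%=90 -o maxclient%=90

# Enforce maxclient% as a hard limit on client file pages.
vmo -p -o strict_maxclient=1

# Steal file pages before computational pages during page replacement.
vmo -p -o lru_file_repage=0

# Verify the current settings.
vmo -a | grep -E 'perm%|maxclient|lru_file_repage'
```

The `-p` flag makes the change persistent across reboots; after changes like these, monitoring should continue to confirm no new bottleneck appears.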
The document discusses troubleshooting communications manager crashes, cores, and service restarts. It covers identifying application coredumps, generating backtraces from core files, searching technical topics, and troubleshooting unresolved coredumps. It also discusses troubleshooting services that fail to start up, symptoms of database problems, server freezes, and using dmesg to view kernel messages. The key aspects covered are debugging core files, analyzing logs and performance data to determine the root cause of crashes, and resolving issues that prevent critical services from starting.
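Generating a backtrace from a core file, as described above, typically looks like this sketch; the binary name `commsmgr` and the core file name are placeholders:

```shell
# Confirm the file really is a core dump and which binary produced it.
file core.12345

# Load the core against the matching executable and dump all stacks.
gdb -batch -ex 'bt full' -ex 'thread apply all bt' ./commsmgr core.12345

# On a live system, recent kernel messages often show why a process
# died (segfault address, OOM kill, and so on).
dmesg | tail -n 20
```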
The document discusses techniques and tools for optimizing Rails applications. It covers topics like benchmarking tools, caching, session storage options, and common performance issues in Rails like slow helper methods and associations. The document provides recommendations on optimizing actions, views, and controllers in Rails.
Must Read HP Data Protector Interview Questions (Laxman J)
This tutorial was collected especially for those searching for exact interview questions: Must Read HP Data Protector Interview Questions. Check more details at Dealdimer (Technical Help).
The document discusses batch file programming and various ways batch files can be used to create utilities, funny programs, and viruses that harm Windows machines. It provides examples of batch file code that can create undeleteable folders, continuously restart a system, corrupt files using for loops, and more. The document also covers basic batch file structure, operators, and ways to prevent virus attacks through batch files.
Security Challenges of Antivirus Engines, Products and Systems (Antiy Labs)
This document discusses security challenges faced by antivirus engines, products, and systems. It notes that antivirus systems are vulnerable to malware just like other software. The document outlines threats including rootkits that can hijack antivirus software processes, format vulnerabilities that can crash engines, and privilege escalation issues. It discusses improving input validation, privilege control, testing, and secure code development to address these challenges. The goal is for antivirus software to remain vigilant against emerging threats through continued research and responsiveness.
In this PowerPoint, learn how a security policy can be your first line of defense. Servers running AIX and other operating systems are frequent targets of cyberattacks, according to the Data Breach Investigations Report. From DoS attacks to malware, attackers have a variety of strategies at their disposal. Having a security policy in place makes it easier to ensure you have appropriate controls in place to protect mission-critical data.
Talk from Embedded Linux Conference, http://elcabs2015.sched.org/event/551ba3cdefe2d37c478810ef47d4ca4c?iframe=no&w=i:0;&sidebar=yes&bg=no#.VRUCknSQQQs
This document provides an agenda and slides for a PowerShell presentation. The agenda covers PowerShell basics, file systems, users and access control, event logs, and system management. The slides introduce PowerShell, discuss cmdlets and modules, and demonstrate various administrative tasks like managing files, users, services, and the firewall using PowerShell. The presentation aims to show how PowerShell can be used for both system administration and security/blue team tasks.
1. What is the value of requiring the OS to provide status information? (udit652068)
1. What is the value of requiring the OS to provide status information?
2. What is the difference between a true layered structure and the way that MS-DOS used layering?
3. Why is an operating system thought to be a "mandatory middleman"?
   - Be able to explain the services and value of this.
4. What is a virtual machine and why is it necessary?
   - How does it work? (Be able to discuss and/or draw a VM structure in a computer system.)
5. Why is debugging a concern for an OS?
   - How can it be accomplished?
6. Why is a bootstrap loader needed?
Solution
Ans 1. During a context switch, before a process is switched out, the operating system must save its PCB (Process Control Block). The PCB consists of the process state, program counter, register values, CPU scheduling information, memory management information, accounting information, and I/O status information. The value of status information lies in recording, for example, how many devices are allocated or occupied, the contents of the open file tables, and so on.
Ans 2. MS-DOS was structured as
Application Program -> Resident System Program -> MS-DOS Device Drivers -> ROM-BIOS Device Drivers
but no well-defined layering was actually enforced. There was also no separation of CPU execution modes (kernel vs. user), so a single error could crash the whole system.
A true layered approach, in contrast, is modular: the OS is broken into layers, with the hardware at the bottom layer and the user interface at the top. Its main advantages are simplicity of construction and debugging; an error found in one layer stays contained within that layer, so the system does not crash.
Ans 3. The operating system acts as a middleman between the user and the computer hardware. Its main objectives are to make the system convenient to use and to utilize the hardware efficiently. Examples include UNIX, MS-DOS, Windows 98/XP/Vista, Windows NT/2000, OS/2, and Mac OS.
The OS provides services to programs as well as to users: to a program it provides an environment in which to execute; to a user it provides a platform on which to run programs. The services it provides include program execution, I/O operations, file-system manipulation, communication, resource allocation, and error detection.
Ans 4. A virtual machine is an emulation of a computer system, based on a computer architecture, that provides the functionality of a physical computer. Its main advantages are isolation between guests and the ability to run several operating systems on one physical machine. A typical structure (here with Virtual Server 2005 as the example hypervisor):
Guest Operating System and Applications
|
Virtual Machine
|
Virtual Server 2005
|
Windows Server 2003 (Host OS)
|
Physical Computer
Ans 5. Debugging is a concern for an OS. Because the OS is built as a multi-layered architecture, it is easier to find the layer in which an error occurred. There are two debugging modes, user mode and kernel mode; kernel-mode debugging is very hard, because we cannot rely on a crashing machine to communicate what happened.
Four methods of debugging an operating system are:
- Sanity checks
- Debuggers
- Deterministic replay
- Moving everything to user space
Ans 6. Bootstrap loader: a program that loads the operating system after power-on completes.
How to debug systemd problems - Fedora Project (Susant Sahani)
This document provides instructions for debugging problems with the Systemd startup process in Fedora. It recommends checking the common bugs document first before filing a bug report. It then lists several useful Systemd commands for investigating services, targets, and the boot process. Finally it outlines some boot parameters that can help with debugging boot issues.
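Commands of the kind the document lists might include the following sketch; the unit name `sshd.service` is just an example:

```shell
# Units that failed during boot
systemctl --failed

# Why a specific service did not start, plus its recent log lines
systemctl status sshd.service
journalctl -u sshd.service -b

# Which units took longest during boot
systemd-analyze blame

# Kernel command-line parameters useful for boot debugging:
#   systemd.log_level=debug systemd.log_target=kmsg
```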
This document summarizes the Linux audit system and proposes improvements. It begins with an overview of auditd and how audit messages are generated and processed in the kernel. Issues with auditd's performance, output format, and filtering are discussed. An alternative approach is proposed that uses libmnl for netlink handling, groups related audit messages into JSON objects, applies Lua-based filtering, and supports multiple output types like ZeroMQ and syslog. Benchmark results show this rewrite reduces CPU usage compared to auditd. The document advocates for continued abstraction and integration of additional data sources while avoiding feature creep.
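For context, the audit rules whose events auditd (or the proposed replacement) consumes are defined like this sketch; the watched path and key name are examples:

```shell
# Watch /etc/passwd for writes and attribute changes, tagged with a
# search key so matching events are easy to retrieve later.
auditctl -w /etc/passwd -p wa -k passwd_changes

# List the active rules, then search the log for events by key.
auditctl -l
ausearch -k passwd_changes
```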
System Programming
Daemon Process
A daemon process is a process that runs in the background and has no controlling terminal. Daemons (also known as background processes) originate in UNIX; the equivalent concepts in Windows are services and agents. Since a daemon usually has no controlling terminal, almost no user interaction is required. Daemon processes are used to provide services that can run in the background without any user interaction. Daemons are often started when the system is bootstrapped (at boot time) and terminate only when the system is shut down.
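The properties above (background execution, no controlling terminal) can be approximated from a shell with `setsid`, as in this minimal sketch; `daemonize` is a helper name invented here, and `sleep 300` stands in for a real service:

```shell
#!/bin/sh
# Run a command detached from the current session and terminal:
#  - setsid starts it in a new session, so it has no controlling terminal
#  - the redirections detach its standard input, output, and error
#  - & puts it in the background
daemonize() {
    setsid "$@" </dev/null >/dev/null 2>&1 &
}

# Example: a stand-in "service" that keeps running after logout.
daemonize sleep 300
```

A full C daemon would also fork twice, chdir to /, and reset its umask; this sketch only shows the session/terminal detachment.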
The document provides an overview of the history and structure of Linux. It discusses how Linux uses the GPL open source license and describes the basic boot process. It also lists some common qualifications for Linux administrator jobs and provides tips for using and administering Linux systems securely and effectively.
SaltConf14 - Ben Cane - Using SaltStack in High Availability Environments (SaltStack)
An overview on the benefits and best practices of using SaltStack for consistency and automation in highly available enterprise environments such as financial services.
Inspection and maintenance tools (Linux / OpenStack) (Gerard Braad)
This handout is part of the training at UnitedStack and will introduce you to several inspection and maintenance tools.
It is generated from the slides at: http://gbraad.gitlab.io/tools-training/
Source: https://gitlab.com/gbraad/tools-training
This document contains the answers to homework questions for the CSE-316 Operating Systems course submitted by Tej Prakash with student ID 10803816. The homework answers security issues in multiprogramming systems, circumstances for using time-sharing over a single-user system, differences between modular kernel and layered approaches in operating system design, factors for choosing a host operating system, the role of the kernel in process interaction, and advantages and disadvantages of synchronous/asynchronous communication, automatic/explicit buffering, send by copy/reference, and fixed-sized/variable-sized messages.
User-data allows scripts to run on instance bootup, enabling automated configuration. IndexMedia improved deployment time from 30 minutes to 90 seconds by splitting scripts into static and instance-specific parts. AutoScale automatically launches and terminates instances to maintain performance within specified bounds based on metrics like CPU utilization. With just four commands, IndexMedia set up an AutoScale group with a scaling policy and alarm to dynamically scale their fleet based on load, solving their problem of maintaining consistent user experience.
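A sketch of the four-command setup using the modern AWS CLI; the group and policy names, AMI ID, instance type, and thresholds are placeholders, not IndexMedia's actual values, and the era's original tooling may have differed:

```shell
# 1. Launch configuration: what each instance looks like, including the
#    user-data bootstrap script that configures it at boot.
aws autoscaling create-launch-configuration \
    --launch-configuration-name web-lc \
    --image-id ami-0123456789abcdef0 --instance-type t3.small \
    --user-data file://bootstrap.sh

# 2. The Auto Scaling group itself, with size bounds.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-lc \
    --min-size 2 --max-size 10 \
    --availability-zones us-east-1a us-east-1b

# 3. A scaling policy: add two instances when triggered.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name scale-out --scaling-adjustment 2 \
    --adjustment-type ChangeInCapacity

# 4. A CloudWatch alarm on CPU utilization; in practice the policy ARN
#    returned by step 3 is attached via --alarm-actions.
aws cloudwatch put-metric-alarm \
    --alarm-name web-cpu-high --namespace AWS/EC2 \
    --metric-name CPUUtilization --statistic Average \
    --period 300 --threshold 70 --evaluation-periods 2 \
    --comparison-operator GreaterThanThreshold
```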
This document discusses different memory management techniques used in operating systems. It begins by describing the basic components and functions of memory. It then explains various memory management algorithms like overlays, swapping, paging and segmentation. Overlays divide a program into instruction sets that are loaded and unloaded as needed. Swapping loads entire processes into memory for execution then writes them back to disk. Paging and segmentation are used to map logical addresses to physical addresses through page tables and segment tables respectively. The document compares advantages and limitations of these approaches.
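As a worked instance of the paging translation described above (values chosen purely for illustration, assuming 4 KiB pages):

```latex
\text{page number} = \left\lfloor \frac{8196}{4096} \right\rfloor = 2,
\qquad \text{offset} = 8196 \bmod 4096 = 4
```

If the page table maps page 2 to frame 7, the physical address is 7 x 4096 + 4 = 28676.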
This document discusses processes in Linux. It defines a process as a running instance of a program in memory that is allocated space for variables and instructions. All processes are descended from the systemd process. It describes process states like running, sleeping, stopped, and zombie. It also discusses process monitoring and management tools like top, ps, kill, and setting process priorities with nice and renice. Examples are provided on using ps to view specific processes by user, name, ID, parent ID, and customize the output.
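The kinds of `ps` invocations described might look like this sketch (GNU procps syntax):

```shell
# All processes belonging to one user.
ps -u root

# Select by PID and by parent PID.
ps -p 1
ps --ppid 1

# Custom output: PID, parent PID, state, nice value, command name,
# sorted by CPU usage.
ps -eo pid,ppid,stat,ni,comm --sort=-pcpu | head

# Lower the priority of the current shell; raising the nice value
# needs no special privilege, lowering it requires root.
renice 10 -p $$
```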
Learn about Linux on System z debugging with Valgrind, one of the most prominent debugging tools for Linux. For more information, visit http://ibm.co/PNo9Cb.
This document provides an overview of kernel tuning and customizing for performance on Enterprise Linux. It discusses monitoring tools, basic tuning steps like disabling unused services, memory tuning including hugepages and transparent huge pages, swap/cache tuning. It also covers I/O and filesystem tuning and networking tuning. The goal is to provide concepts and approaches for tuning the major components to optimize performance.
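A few representative knobs from the areas listed above, as a sketch; the values are illustrative, not recommendations:

```shell
# Swap/cache behavior: prefer reclaiming page cache over swapping.
sysctl -w vm.swappiness=10

# Reserve 512 explicit 2 MiB hugepages.
sysctl -w vm.nr_hugepages=512

# Disable transparent hugepages (often advised for databases).
echo never > /sys/kernel/mm/transparent_hugepage/enabled

# Pick an I/O scheduler for a block device.
echo deadline > /sys/block/sda/queue/scheduler

# Persist a sysctl setting across reboots.
echo 'vm.swappiness = 10' >> /etc/sysctl.d/99-tuning.conf
```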
This document discusses principles for hardening a Liferay portal implementation. It describes hardening at the network, server, and application levels. At the network level, it recommends restricting connections and using a proxy. At the server level, it recommends securing server administration and disabling unnecessary services. At the application level, it provides 17 specific recommendations for hardening the Liferay application through configuration and plugins. These include removing demo data, changing default accounts, keeping systems patched, and implementing features like the audit plugin and log rotation. It concludes by noting that the full implementation requires hardening other system components like the web server, application server, and database.
6414 Preparation and planning of the development of a proficiency test in the... (Damir Delija)
This document discusses the preparation and planning for developing a proficiency test in digital forensics using a Greyp electric bicycle. It outlines the planned project phases including creating scenarios, making forensic copies, collecting and evaluating results, and creating and distributing the test. Preliminary analyses of the bicycle have been conducted using various forensic tools to identify and validate digital artifacts that could be used for the test. While work has faced delays due to COVID-19, initial results suggest there are sufficient artifacts across the bicycle and associated devices and cloud storage to form the basis of a useful proficiency test.
Introducing new content into the teaching of digital forensics and cybersecurity... (Damir Delija)
Abstract - This paper considers ways of continuously introducing new content into courses in the field of cybersecurity. As an example we use Fundamentals of Computer Forensics, a course into which new content is introduced through students' practical and theoretical projects; ideas for the projects are proposed by both students and lecturers. The proposed procedure consists of testing through student work, then incorporating the results into the teaching materials. For a student project to be usable it must satisfy a number of requirements: suitability to the student's level of knowledge and the available equipment, availability of tools and systems, simple implementation and portability, use of open-source and free tools, and minimal cost.
Remote forensics involves acquiring digital evidence from remote devices or locations without physical access. It includes applications like electronic discovery, incident response, network forensics, and cloud forensics. While often understood as live forensics, remote forensics also includes techniques like booting devices into forensic modes remotely or using forensic tools on remote systems to access local evidence. Enterprise-level remote forensic tools allow preventative forensics and faster incident response but are not widely used due to budget, knowledge, and legal barriers. As technology spreads and more data is stored remotely, remote forensics will become more important and perhaps even fully automated for Internet of Things devices in the future.
The document discusses EnCase Direct Network Preview, which allows an examiner to access and examine data on a powered-on computer remotely. It involves generating encryption key pairs, creating a direct servlet file using the public key, deploying the servlet on the target computer, and then connecting from the examiner's EnCase interface by providing the IP address and port. This enables viewing and analyzing the contents of drives, removable media, and memory on the live remote system without needing authentication files or passphrases if disks are encrypted.
Draft: current state of digital forensics and data science (Damir Delija)
In this presentation we introduce the current state of digital forensics, its position within IT security in general, and its relations with data science and data analysis. Many strong links exist among these technical and scientific fields, but they are usually not taken into consideration. For data owners, forensic researchers, and investigators, these connections and data views present additional hidden value.
This document discusses reasons for disliking digital forensics and identifies areas for improvement. It begins by introducing the author's background and motivation. The document then examines issues with naming conventions, tools/practices, standards/definitions, training/certification, and subfields. Key problems highlighted include a lack of standardization, compatibility issues between tools, outdated mindsets, and insufficient computing foundations in training. The author advocates treating digital forensics as an engineering science and applying best computing practices. Overall, the document critically analyzes challenges currently facing the field and questions how these issues may impact the future if not addressed.
Concepts and Methodology in Mobile Devices Digital Forensics Education and Training, Damir Delija
One of the draft versions of "Concepts and Methodology in Mobile Devices Digital Forensics Education and Training".
Abstract - This paper presents various issues in digital forensics of mobile devices and how to address them in the related education and training process. Mobile device forensics is a new, very fast-developing field which lacks standardization, compatibility, tools, methods, and skills. All these drawbacks affect the results of the forensic process and also have a deep influence on the training and education process. In this paper, real-life training experience is presented, with tools, devices, procedures, and organization, with the purpose of improving the process of mobile device forensics and mobile forensic training and education.
The document provides an overview of the deep web and digital investigations. It defines the deep web as data that is inaccessible to regular search engines but exists on the internet. This includes dynamically generated web pages, private websites requiring login, and files accessible only through direct filesystem access. The document estimates the deep web is 400-550 times larger than the surface web that is indexed by search engines. Standard digital forensic procedures can be applied to investigate the deep web, but tools may need to be adapted to handle specialized browsers and access methods used to retrieve deep web resources.
Datafocus 2014: On-line Digital Forensic Investigations, Damir Delija
This document discusses how to conduct on-line digital forensic investigations using EnCase Enterprise v7. It describes the key EnCase Enterprise components that enable forensically sound and secure network investigations, including the SAFE for authentication, the Examiner for examinations, and Servlets installed on remote machines. It provides steps for creating a new case, adding target nodes, conducting live previews and analyses of remote disks and RAM, and performing automated sweeps to collect files and system information from multiple machines using snapshot, file processing, and system info modules. The document emphasizes the importance of planning, monitoring sweeps, and documenting results.
The document provides an overview of the basic steps for conducting an ediscovery collection using Guidance Software's EnCase Enterprise v7. It describes installing the required EnCase Enterprise components like the SAFE, Examiner and Servlets. It then outlines how to open a new case, define the target nodes, create a collection sweep to retrieve files and metadata based on conditions, and handle the sweep results. The summary provides the essential workflow and technical components involved in performing a foundational EnCase Enterprise collection.
The document discusses how to process scanned documents in EnCase forensic software. It outlines that paper evidence needs to be converted to a digital format that forensic software can analyze. This involves scanning paper documents to create image files, then using optical character recognition (OCR) to convert those images into text files that can be indexed and searched in forensic software like EnCase. It stresses the importance of keeping the entire process forensically sound by not altering the original evidence, documenting all tools and files used, and considering metadata changes.
The document discusses using forensic preview, triage, and collection techniques with the TD3 device. It explores using these processes to complement full drive collection. Preview allows determining if a volume contains evidence, triage prioritizes investigation by reviewing data quickly, and collection fully images storage if enough evidence is found. The document outlines using the TD3 over iSCSI to remotely access storage in a forensically sound way for these processes. This enables fast review and triage to reduce data volume and close cases more efficiently. Hands-on with these techniques will be demonstrated using EnCase tools connected remotely to the TD3 during the training.
This document discusses the digital forensic tool EnCase Forensic. It provides an overview of EnCase and its features, including that it is a leading forensic tool accepted in courts. The document then outlines a scenario where EnCase will be used to conduct a forensic investigation based on a search warrant. The remainder of the document walks through the key functions and screens of EnCase like adding disk images, searching for evidence, tagging evidence, and reporting while conducting the outlined forensic investigation scenario.
Usage Aspects and Techniques for Enterprise Forensics Data Analytics Tools, Damir Delija
This document discusses techniques for accessing and analyzing data from enterprise forensic tools using external data analytics tools. It provides an example using the forensic tools EnCase v7 and FTK to collect disk images, memory images, and system snapshots from endpoints. While these tools store useful data, it can be difficult to extract and analyze. The document demonstrates connecting an EnCase database to an external analytics tool to allow easier viewing and analysis of process and network data across multiple snapshots. This approach could integrate forensic data with security tools like SIEM for more automated incident response.
Draft: AIX 5.2 WLM examples for banking, based on business process prioritization
Introduction:
This is a short example of how a p690 running AIX 5.2 ML01 can be configured around business needs through WLM. The same ideas can be applied to any type of workload management. There are many other possibilities, such as dynamic and static LPARs, database-internal scheduling, etc.; it is important to use the method which best exploits all the features of the OS, the database, the application, and the workload characteristics.
Purpose:
WLM can do a good job of partitioning a machine into separate sub-machines. The policy presented in this paper is based on the business purpose of the machine. This was essential because the applications were migrated (consolidated, to use a more appropriate word) from different platforms to the new machine, and it was not 100% clear how these applications would coexist. There was also no formal consolidation process.
There is more than one database instance on the machine, each with different business importance. Resources consumed by one instance must not interfere with the behavior of the other instances; e.g. the more important database instance must have some minimal volume of resources available on request.
Machines are usually organized into a batch/interactive type of class organization. Because of some advanced AIX 5.2 WLM features we decided to organize the machine into classes based on business importance; if there is a need for further sub-classification into batch/interactive, each business class already has such subclasses defined by WLM.
It is important to note that WLM is not a protection against resource hogs within a class, and that WLM limits on I/O and memory can cause more harm than the actual hog. System behavior must therefore be closely monitored.
Methodology:
Each process is classified automatically at start by WLM, based on static rules and inheritance. Because of the special nature of the Oracle user processes, such a process actually needs at least one dynamic reclassification after database start to reach the appropriate class.
WLM is started at system boot; every 5 minutes a dynamic reclassification runs, and class statistics are naturally collected as well.
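The 5-minute reclassification can be driven from cron. A minimal sketch of a root crontab entry, assuming the wlm-oracle.ksh script listed in this paper is installed as /etc/wlm/wlm-oracle.ksh (the installation path is an assumption); note that AIX cron does not accept the */5 shorthand, so the minutes must be listed explicitly:

```shell
# Hypothetical root crontab entry: run the dynamic WLM reclassification
# every 5 minutes (AIX cron needs an explicit minute list, not */5).
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /etc/wlm/wlm-oracle.ksh >/dev/null 2>&1
```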
Literature:
AIX 5L Workload Manager (WLM), SG24-5977, IBM Redbook
Examples:
WLM classes

Class          Priority (tier)   Description                                  Limits
-------------  ----------------  -------------------------------------------  --------------------------------------------
System         0                 Defined by the system; all root processes    Unlimited
                                 not classified in other classes
Default        2                 Defined by the system; all non-root          Unlimited
                                 processes not classified in other
                                 classes (bin, adm users)
Shared         0                 Defined by the system; no active             Unlimited
                                 processes
biz_critical   1                 Business-critical processes; this class      CPU:    min 80%, soft max 100%, hard max 100%
                                 is the purpose of the machine                diskIO: min 80%, soft max 100%, hard max 100%
biz_important  4                 Ordinary business processes, e.g.            CPU:    min 20%, soft max 100%, hard max 100%
                                 additional databases which are not the
                                 primary purpose of the machine
biz_regular    7                 All processes not in the above classes       CPU:    min 10%, soft max 20%, hard max 100%
Rules for dynamic reclassification (/etc/wlm/ma.conf)
# match string (instance or subsystem)   class           inheritance
prod1                                    biz_critical    yes
laguna                                   biz_important   yes
quiet                                    biz_important   yes
rman                                     biz_important   yes
Unicert1                                 biz_regular     yes
tsm                                      biz_regular     yes
Rules for static classification (/etc/wlm/rules)
#   Class         User   Group   Application         Type   Tag
001 biz_regular    -      -      /usr/bin/tar         -      -
002 biz_regular    -      -      /usr/bin/dd          -      -
003 biz_regular    -      -      /usr/tivoli/tsm/*    -      -
004 System         root   -      -                    -      -
005 Default        -      -      -                    -      -
Dynamic reclassification: wlm-oracle.ksh script
#!/usr/bin/ksh
# Sample script to perform manual assignment of processes whose different
# instances can be differentiated by their output in ps -ef.
#
# Examples of this kind of processes are ORACLE database instances.
#
# Create a configuration file /etc/wlm/ma.conf with the following format:
# one line for each combination of:
#
#   <Instance Name> <Class> <Inheritance>
#
# where:
#   o Instance Name is the ORACLE instance.
#   o Class is the name of the class to assign the processes to;
#     either `supername' for superclasses or `supername.subname'
#     for subclasses.
#   o Inheritance is a flag which should be set to yes if you
#     want all processes belonging to a process group, whose
#     leader is the process being manually assigned, to be
#     manually assigned too, or no otherwise.
#
# MANUAL is an array of three positions:
#   o Position 0: Instance name.
#   o Position 1: Class name.
#   o Position 2: Inheritance flag.
#############################################################
# Source SG24-5977-01
# 15.06.2003 Version 0
# original
# a lot of bugs ...
# wrong magic cookie, wlassign processing, space to coma etc ...
#
#############################################################
# 20.06.2003 version 1
# changes in CONF file
# there can be comment lines start with hash
# there can be empty lines
# changes in script
# assignment for each process separately - bug in argument list
# comment and empty line processing in conf file
#
#############################################################
##
# DIRECTORIES
##
WLMDIR=/etc/wlm
##
# VARIABLES
##
CONFFILE=$WLMDIR/ma.conf
ECHO=""    # set to "echo" for a dry run: chclass is printed instead of executed
PATH=/usr/bin:/usr/sbin:$PATH
##
# FUNCTIONS
##
getpids()
{
    # Print the PIDs of all processes whose ps -ef output matches $1.
    inst="$1"
    test -z "$inst" && exit 1
    echo $(ps -ef | grep "$inst" | grep -v grep | awk '{ print $2 }')
}
##
# MAIN -
##
egrep -v "^#" $CONFFILE | awk 'NF>0{print}' | (while read LINE
do
    set -A MANUAL $LINE
    echo "Changing the inheritance attribute on class ${MANUAL[1]}..."
    OLDINH=`lsclass -f ${MANUAL[1]} | grep inheritance | awk '{ print $3 }' | sed 's/"//g'`
    [ ! "$OLDINH" ] && OLDINH="no"
    $ECHO chclass -a inheritance=${MANUAL[2]} ${MANUAL[1]}
    echo "Refreshing WLM..."
    wlmcntrl -u
    echo "Manually assigning the processes to class ${MANUAL[1]}..."
    echo "Getting PIDs' list for instance ${MANUAL[0]}..."
    n=0
    for p in $(getpids ${MANUAL[0]})
    do
        # Assign one PID at a time - passing the whole list at once
        # hits an argument-list bug (see changelog, version 1).
        wlmassign -S ${MANUAL[1]} $p
        n=$(expr $n + 1)
    done
    echo "Assigned $n processes to class ${MANUAL[1]}."
    echo "Resetting old inheritance value on class ${MANUAL[1]}..."
    chclass -a inheritance="$OLDINH" ${MANUAL[1]}
    echo "Refreshing WLM..."
    wlmcntrl -u
done
)
#######################
##logger WLM updated
exit 0
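The comment- and empty-line filtering at the top of the main loop can be exercised outside AIX. A minimal sketch in plain POSIX shell, using a hypothetical sample file under /tmp and replacing ksh's `set -A` array with positional parameters:

```shell
# Build a sample ma.conf with a comment line and an empty line, then apply
# the same filter the script uses: drop comments, drop empty lines, and
# split each remaining line into its three fields.
cat > /tmp/ma.conf.sample <<'EOF'
# match string   class          inheritance
prod1            biz_critical   yes

tsm              biz_regular    yes
EOF

egrep -v "^#" /tmp/ma.conf.sample | awk 'NF>0{print}' | while read LINE
do
    set -- $LINE    # POSIX stand-in for ksh's 'set -A MANUAL $LINE'
    echo "instance=$1 class=$2 inheritance=$3"
done
```

Running the snippet prints one instance/class/inheritance triple per active rule, which is exactly what the ksh script loads into the MANUAL array.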