NEWYORKSYSTRAINING is dedicated to offering quality IT online training and comprehensive IT consulting services with a complete business service delivery orientation.
The Unified Process (UP) is a popular iterative software development framework that uses use cases, architecture-centric design, and the Unified Modeling Language. It originated from Jacobson's Objectory process in the 1980s and was further developed by Rational Software into the Rational Unified Process. The UP consists of four phases - inception, elaboration, construction, and transition - and emphasizes iterative development, architectural drivers, and risk-managed delivery.
This presentation is about NoSQL, which stands for "Not Only SQL." It covers using NoSQL for Big Data and how NoSQL differs from RDBMS.
This document discusses data types and data structures. It defines them and describes their key attributes. For data types, it covers specification, implementation, operations and examples of elementary types. For data structures, it discusses composition, organization, representation and implementation of operations. It also addresses type equivalence checking, conversion and lists several common data structures like arrays, records, lists and files.
Data warehousing and online analytical processing - VijayasankariS
The document discusses data warehousing and online analytical processing (OLAP). It defines a data warehouse as a subject-oriented, integrated, time-variant and non-volatile collection of data used to support management decision making. It describes key concepts such as data warehouse modeling using data cubes and dimensions, extraction, transformation and loading of data, and common OLAP operations. The document also provides examples of star schemas and how they are used to model data warehouses.
This document provides an overview of key concepts related to data warehousing including what a data warehouse is, common data warehouse architectures, types of data warehouses, and dimensional modeling techniques. It defines key terms like facts, dimensions, star schemas, and snowflake schemas and provides examples of each. It also discusses business intelligence tools that can analyze and extract insights from data warehouses.
This document discusses different types of data models, including hierarchical, network, relational, and object-oriented models. It focuses on explaining the relational model. The relational model organizes data into tables with rows and columns and handles relationships using keys. It allows for simple and symmetric data retrieval and integrity through mechanisms like normalization. The relational model is well-suited for the database assignment scenario because it supports linking data across multiple tables using primary and foreign keys, and provides query capabilities through SQL.
Crystal Reports is a business intelligence tool used to design and generate reports from various data sources. It is integrated with Visual Studio .NET and can pull data from a SQL Server database using ADO.NET into a DataSet, and then transfer the data to CrystalReportViewer to display on an ASP.NET web page. There are two methods to publish data to a Crystal Report - the pull method queries the database directly, while the push method shapes the data in code before passing it to the report. Steps for creating a basic Crystal Report include connecting to a database, selecting tables, dragging fields into the report, and previewing the output.
This document provides an overview of the 3-tier data warehouse architecture. It discusses the three tiers: the bottom tier contains the data warehouse server which fetches relevant data from various data sources and loads it into the data warehouse using backend tools for extraction, cleaning, transformation and loading. The bottom tier also contains the data marts and metadata repository. The middle tier contains the OLAP server which presents multidimensional data to users from the data warehouse and data marts. The top tier contains the front-end tools like query, reporting and analysis tools that allow users to access and analyze the data.
This document provides an introduction to NoSQL databases. It discusses the history and limitations of relational databases that led to the development of NoSQL databases. The key motivations for NoSQL databases are that they can handle big data and provide better scalability and flexibility than relational databases. The document describes some core NoSQL concepts like the CAP theorem and different types of NoSQL databases like key-value, columnar, document and graph databases. It also outlines some remaining research challenges in the area of NoSQL databases.
OLAP (online analytical processing) allows users to easily extract and view data from different perspectives. The term was coined by E.F. Codd in the early 1990s, and OLAP uses multidimensional data structures called cubes to store and analyze data. OLAP utilizes either a multidimensional (MOLAP), relational (ROLAP), or hybrid (HOLAP) approach to store cube data in databases and provide interactive analysis of data.
SDN (Software Defined Network) and NFV (Network Function Virtualization) for I... - Sagar Rai
Software, Software Defined Network, Network Function Virtualization, SDN, NFV, Internet of Things, Basics of Internet of Things, Network Basics, Virtualization, Limitations of Conventional Networks, OpenFlow, Basics of Conventional Networks
The document discusses major issues in data mining including mining methodology, user interaction, performance, and data types. Specifically, it outlines challenges of mining different types of knowledge, interactive mining at multiple levels of abstraction, incorporating background knowledge, visualization of results, handling noisy data, evaluating pattern interestingness, efficiency and scalability of algorithms, parallel and distributed mining, and handling relational and complex data types from heterogeneous databases.
Hadoop MapReduce is an open source framework for distributed processing of large datasets across clusters of computers. It allows parallel processing of large datasets by dividing the work across nodes. The framework handles scheduling, fault tolerance, and distribution of work. MapReduce consists of two main phases - the map phase, where the data is processed as key-value pairs, and the reduce phase, where the outputs of the map phase are aggregated together. It provides an easy programming model for developers to write distributed applications for large-scale processing of structured and unstructured data.
This document provides an overview of data warehousing concepts including dimensional modeling, online analytical processing (OLAP), and indexing techniques. It discusses the evolution of data warehousing, definitions of data warehouses, architectures, and common applications. Dimensional modeling concepts such as star schemas, snowflake schemas, and slowly changing dimensions are explained. The presentation concludes with references for further reading.
Hive is a data warehouse infrastructure tool that allows users to query and analyze large datasets stored in Hadoop. It uses a SQL-like language called HiveQL to process structured data stored in HDFS. Hive stores metadata about the schema in a database and stores the data itself in HDFS. It provides a familiar interface for querying large datasets using SQL-like queries and scales easily to large datasets.
The document compares NoSQL and SQL databases. It notes that NoSQL databases are non-relational and have dynamic schemas that can accommodate unstructured data, while SQL databases are relational and have strict, predefined schemas. NoSQL databases offer more flexibility in data structure, but SQL databases provide better support for transactions and data integrity. The document also discusses differences in queries, scaling, and consistency between the two database types.
This document discusses different types of mainframe systems, beginning with batch systems where users submit jobs offline and jobs are run sequentially in batches. It then describes multiprogrammed systems which allow multiple jobs to reside in memory simultaneously, improving CPU utilization. Finally, it covers time-sharing systems which enable interactive use by multiple users at once through very fast switching between programs, minimizing response time. The key difference between multiprogrammed and time-sharing systems is the prioritization of maximizing CPU usage versus minimizing response time respectively.
The document discusses UML diagrams including state diagrams and activity diagrams. For state diagrams, it describes how they model the dynamic aspects and states of an object over time through states, transitions, and events. The key elements of a state diagram like states, transitions, and events are defined. It also provides an example state diagram. For activity diagrams, it describes how they model the flow of activities in a system through activity nodes and edges. The basic components of an activity diagram like activities, forks, joins, and decisions are outlined. It concludes with the main uses of activity diagrams to model workflows and business requirements.
The document discusses 3-tier architecture, which separates an application into three logical layers - the presentation layer, business logic layer, and data layer. It describes how this architecture evolved from earlier single-tier and dual-tier models. The 3-tier model provides advantages like scalability, reusability, security, and availability. While more complex than 2-tier, it allows for improved performance in medium to large systems by distributing the application across servers. An example is provided of implementing a 3-tier application using ASP.NET with distinct presentation, business, and data access layers.
Hadoop YARN is a specific component of the open source Hadoop platform for big data analytics.
YARN stands for Yet Another Resource Negotiator. YARN was introduced to make the most out of HDFS.
Job scheduling is also handled by YARN.
The waterfall model segments the software development process into sequential phases: planning, requirements definition, design, implementation, system testing, and maintenance. Each phase has defined inputs, processes, and outputs. The planning phase involves understanding the problem, feasibility studies, and developing a solution. Requirements definition produces a specification describing the required software functions and constraints. Design identifies software components and their relationships. Implementation translates the design into code. System testing integrates and accepts the software. Maintenance modifies the software after release. While the phases are linear, the development process is not always perfectly sequential.
The document provides an overview of the GoF design patterns including the types (creational, structural, behavioral), examples of common patterns like factory, abstract factory, singleton, adapter, facade, iterator, observer, and strategy, and UML diagrams illustrating how the patterns can be implemented. Key aspects of each pattern like intent, relationships, pros and cons are discussed at a high level. The document is intended to introduce software engineers to design patterns and how they can be applied.
This document discusses functional and non-functional requirements. Functional requirements describe the behavior of a system and support user goals, while non-functional requirements describe how the system works and make it more usable. Functional requirements should include data descriptions, screen operations, workflows, and access controls. Non-functional requirements should cover usability, reliability, performance, and supportability. Non-functional requirements are further classified into categories like process, delivery, implementation, and external constraints.
OLTP systems emphasize short, frequent transactions with a focus on data integrity and query speed. OLAP systems handle fewer but more complex queries involving data aggregation. OLTP uses a normalized schema for transactional data while OLAP uses a multidimensional schema for aggregated historical data. A data warehouse stores a copy of transaction data from operational systems structured for querying and reporting, and is used for knowledge discovery, consolidated reporting, and data mining. It differs from operational systems in being subject-oriented, larger in size, containing historical rather than current data, and optimized for complex queries rather than transactions.
This document discusses NoSQL and the CAP theorem. It begins with an introduction of the presenter and an overview of topics to be covered: What is NoSQL and the CAP theorem. It then defines NoSQL, provides examples of major NoSQL categories (document, graph, key-value, and wide-column stores), and explains why NoSQL is used, including to handle large, dynamic, and distributed data. The document also explains the CAP theorem, which states that a distributed data store can only satisfy two of three properties: consistency, availability, and partition tolerance. It provides examples of how to choose availability over consistency or vice versa. Finally, it concludes that both SQL and NoSQL have valid use cases and a combination of the two is often appropriate.
The document provides an overview of data warehousing, decision support, online analytical processing (OLAP), and data mining. It discusses what data warehousing is, how it can help organizations make better decisions by integrating data from various sources and making it available for analysis. It also describes OLAP as a way to transform warehouse data into meaningful information for interactive analysis, and lists some common OLAP operations like roll-up, drill-down, slice and dice, and pivot. Finally, it gives a brief introduction to data mining as the process of extracting patterns and relationships from data.
The document discusses advances in database querying and summarizes key topics including data warehousing, online analytical processing (OLAP), and data mining. It describes how data warehouses integrate data from various sources to enable decision making, and how OLAP tools allow users to analyze aggregated data and model "what-if" scenarios. The document also covers data transformation techniques used to build the data warehouse.
The document provides an overview of data warehousing and data mining. It discusses what a data warehouse is, how it is structured, and how it can help organizations make better decisions by integrating data from multiple sources and facilitating online analytical processing (OLAP). It also covers key components of a data warehousing architecture like the data manager, data acquisition, metadata repository, and middleware that connect the data warehouse to operational databases and analytical tools.
The document discusses OLAP cubes and data warehousing. It defines OLAP as online analytical processing used to analyze aggregated data in data warehouses. Key concepts covered include star schemas, dimensions and facts, cube operations like roll-up and drill-down, and different OLAP architectures like MOLAP and ROLAP that use multidimensional or relational storage respectively.
As a follow-on to the presentation "Building an Effective Data Warehouse Architecture", this presentation will explain exactly what Big Data is and its benefits, including use cases. We will discuss how Hadoop, the cloud and massively parallel processing (MPP) are changing the way data warehouses are being built. We will talk about hybrid architectures that combine on-premise data with data in the cloud as well as relational data and non-relational (unstructured) data. We will look at the benefits of MPP over SMP and how to integrate data from Internet of Things (IoT) devices. You will learn what a modern data warehouse should look like and how the role of a Data Lake and Hadoop fit in. In the end you will have guidance on the best solution for your data warehouse going forward.
The document provides an overview of key concepts in data warehousing and business intelligence, including:
1) It defines data warehousing concepts such as the characteristics of a data warehouse (subject-oriented, integrated, time-variant, non-volatile), grain/granularity, and the differences between OLTP and data warehouse systems.
2) It discusses the evolution of business intelligence and key components of a data warehouse such as the source systems, staging area, presentation area, and access tools.
3) It covers dimensional modeling concepts like star schemas, snowflake schemas, and slowly and rapidly changing dimensions.
This document discusses the components and architecture of a data warehouse. It describes the major components as the source data component, data staging component, information delivery component, metadata component, and management/control component. It then discusses each of these components in more detail, specifically covering source data types, the extract-transform-load process in data staging, the data storage repository, and authentication/monitoring in information delivery. Dimensional modeling is also introduced as the preferred approach for data warehouse design compared to entity-relationship modeling.
Introduction to Data Warehouse. Summarized from the first chapter of 'The Data Warehouse Lifecycle Toolkit: Expert Methods for Designing, Developing, and Deploying Data Warehouses' by Ralph Kimball
Topics covered: types of database processing; OLTP vs. data warehouses (OLAP); the subject-oriented, integrated, time-variant and non-volatile characteristics; data warehouse functionalities - roll-up (consolidation), drill-down, slicing, dicing, pivot; the KDD process; and applications of data mining.
The document provides an overview of SAP BI training. It discusses that SAP stands for Systems Applications and Products in Data Processing and was founded in 1972 in Germany. It is the world's fourth largest software provider and largest provider of ERP software. The training covers topics such as the 3-tier architecture of SAP, data warehousing, ETL, the SAP BI architecture and key components, OLTP vs OLAP, business intelligence definition, and the ASAP methodology for SAP implementations.
The document discusses the need for data warehousing and provides examples of how data warehousing can help companies analyze data from multiple sources to help with decision making. It describes common data warehouse architectures like star schemas and snowflake schemas. It also outlines the process of building a data warehouse, including data selection, preprocessing, transformation, integration and loading. Finally, it discusses some advantages and disadvantages of data warehousing.
The document provides an overview of key data warehousing concepts. It defines a data warehouse as a single, consistent store of data obtained from various sources and made available to users in a format they can understand for business decision making. The document outlines some common questions end users may have that a data warehouse can help answer. It also discusses the differences between online transaction processing (OLTP) systems and data warehouses, including that data warehouses integrate historical data from various sources and are optimized for analysis rather than transactions.
A data warehouse is a central repository of historical data from an organization's various sources designed for analysis and reporting. It contains integrated data from multiple systems optimized for querying and analysis rather than transactions. Data is extracted, cleaned, and loaded from operational sources into the data warehouse periodically. The data warehouse uses a dimensional model to organize data into facts and dimensions for intuitive analysis and is optimized for reporting rather than transaction processing like operational databases. Data warehousing emerged to meet the growing demand for analysis that operational systems could not support due to impacts on performance and limitations in reporting capabilities.
This document provides an agenda and overview for a data warehousing training session. The agenda covers topics such as data warehouse introductions, reviewing relational database management systems and SQL commands, and includes a case study discussion with Q&A. Background information is also provided on the project manager leading the training.
1. Storage challenges - The exponentially growing volumes of data can overwhelm traditional storage systems and databases.
2. Processing challenges - Analyzing large and diverse datasets in a timely manner requires massively parallel processing across thousands of CPU cores.
3. Skill challenges - There is a shortage of data scientists and engineers with the skills needed to unlock insights from big data. Traditional IT skills are insufficient.
The document outlines the agenda for a data warehousing training course. The agenda covers topics such as data warehouse structure and modeling, extract transform load (ETL) processes, dimensional modeling, aggregation, online analytical processing (OLAP), and data marts. Time is allocated to discuss loading, refreshing, and querying the data warehouse.
I often hear from clients: "We don't know much about Big Data - can you tell us what it is and how it can help our business?" Yes! The first step is this vendor-free presentation, where I start with a business-level discussion, not a technical one. Big Data is an opportunity to re-imagine our world, to track new signals that were once impossible, to change the way we experience our communities, our places of work and our personal lives. I will help you to identify the business value opportunity from Big Data and how to operationalize it. Yes, we will cover the buzz words: modern data warehouse, Hadoop, cloud, MPP, Internet of Things, and Data Lake, but I will show use cases to better understand them. In the end, I will give you the ammo to go to your manager and say "We need Big Data, and here is why!" Because if you are not utilizing Big Data to help you make better business decisions, you can bet your competitors are.
This document provides an overview of Oracle's Information Management Reference Architecture. It includes a conceptual view of the main architectural components, several design patterns for implementing different types of information management solutions, a logical view of the components in an information management system, and descriptions of how data flows through ingestion, interpretation, and different data layers.
What is OLAP - Data Warehouse Concepts - IT Online Training @ Newyorksys
1. What Is OLAP?
- Online Analytical Processing - term coined by E.F. Codd in a 1994 white paper commissioned by Arbor Software
- Generally synonymous with earlier terms such as Decision Support, Business Intelligence, Executive Information System
- OLAP = Multidimensional Database
- MOLAP: Multidimensional OLAP (Arbor Essbase, Oracle Express)
- ROLAP: Relational OLAP (Informix MetaCube, MicroStrategy DSS Agent)
3. The OLAP Market
- Rapid growth in the enterprise market: 1995: $700 Million; 1997: $2.1 Billion
- Significant consolidation activity among major DBMS vendors:
  - 10/94: Sybase acquires ExpressWay
  - 7/95: Oracle acquires Express
  - 11/95: Informix acquires Metacube
  - 1/97: Arbor partners up with IBM
  - 10/96: Microsoft acquires Panorama
- Result: OLAP shifted from small vertical niche to mainstream DBMS category
4. Strengths of OLAP
- It is a powerful visualization paradigm
- It provides fast, interactive response times
- It is good for analyzing time series
- It can be useful to find some clusters and outliers
- Many vendors offer OLAP tools
5. OLAP Is FASMI
- Fast
- Analysis
- Shared
- Multidimensional
- Information
6. Multi-dimensional Data
- "Hey... I sold $100M worth of goods"
- Dimensions: Product, Region, Time
- Hierarchical summarization paths:
  - Product: Industry -> Category -> Product
  - Region: Country -> Region -> City -> Office
  - Time: Year -> Quarter -> Month / Week -> Day
- [Cube figure: Product (Toothpaste, Juice, Cola, Milk, Cream, Soap) x Region (W, S, N) x Month (1-7)]
7. Data Cube Lattice
- Cube lattice: ABC; AB, AC, BC; A, B, C; none
- Can materialize some group-bys, compute others on demand
- Question: which group-bys to materialize?
- Question: what indices to create?
- Question: how to organize data (chunks, etc.)
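To make the lattice concrete, here is a minimal Python sketch (not part of the original deck) that enumerates every group-by in the cube lattice for three hypothetical dimensions A, B, C, from the full group-by down to the grand total:

```python
from itertools import combinations

dimensions = ["A", "B", "C"]  # hypothetical dimension names from the slide

# Enumerate every node of the cube lattice: one group-by per subset of the
# dimensions, from (A, B, C) down to the empty subset (the "none" node,
# i.e. the grand total). Any of these can be materialized or computed on demand.
lattice = [combo
           for size in range(len(dimensions), -1, -1)
           for combo in combinations(dimensions, size)]

for groupby in lattice:
    print(groupby if groupby else "none")
```

With d dimensions this yields 2**d group-bys, which is why choosing which ones to materialize matters.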
8. A Visual Operation: Pivot (Rotate)
- [Figure: a small sales cube with dimensions Product (Juice, Cola, Milk, Cream), Region (NY, LA, SF) and Date (3/1-3/4, rolling up to Month), rotated so that different dimensions appear on the rows and columns]
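As an illustration only (the deck shows this as a picture), the sketch below uses pandas to pivot a toy sales table between two orientations; the products, regions and dates echo the slide, but the table and amounts are invented for the example:

```python
import pandas as pd

# Toy fact table with invented amounts.
sales = pd.DataFrame({
    "Product": ["Juice", "Cola", "Milk", "Cream", "Juice", "Cola"],
    "Region":  ["NY",    "NY",   "LA",   "SF",    "LA",    "SF"],
    "Date":    ["3/1",   "3/1",  "3/2",  "3/3",   "3/4",   "3/2"],
    "Amount":  [10, 47, 30, 12, 25, 18],
})

# One orientation: Product on the rows, Region on the columns.
by_product_region = sales.pivot_table(index="Product", columns="Region",
                                      values="Amount", aggfunc="sum")

# Pivot (rotate): put Date on the rows and Region on the columns instead.
by_date_region = sales.pivot_table(index="Date", columns="Region",
                                   values="Amount", aggfunc="sum")

print(by_product_region, by_date_region, sep="\n\n")
```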
9. Slicing and Dicing
- [Figure: a cube with dimensions Product (Household, Telecomm, Video, Audio), Sales Channel (Retail, Direct, Special) and Regions (India, Far East, Europe); fixing Product = Telecomm gives "The Telecomm Slice"]
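In code, a slice fixes one dimension to a single member, while a dice restricts several dimensions to subsets of their members. A hedged sketch with an invented toy cube (the dimension values echo the slide, the sales figures do not):

```python
import pandas as pd

cube = pd.DataFrame({
    "Product": ["Household", "Telecomm", "Video", "Audio", "Telecomm", "Video"],
    "Channel": ["Retail", "Retail", "Direct", "Special", "Direct", "Retail"],
    "Region":  ["India", "Far East", "Europe", "India", "Europe", "Far East"],
    "Sales":   [120, 340, 210, 90, 400, 150],   # illustrative numbers
})

# Slice: fix a single member of one dimension -- "the Telecomm slice".
telecomm_slice = cube[cube["Product"] == "Telecomm"]

# Dice: restrict several dimensions at once to subsets of their members.
diced = cube[cube["Product"].isin(["Telecomm", "Video"]) &
             cube["Region"].isin(["Europe", "Far East"])]

print(telecomm_slice, diced, sep="\n\n")
```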
10. Roll-up and Drill Down
- Example hierarchy (higher to lower): Sales Channel -> Region -> Country -> State -> Location Address -> Sales Representative
- Roll up: move to a higher level of aggregation
- Drill down: move to low-level details
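Roll-up and drill-down amount to aggregating at different levels of a dimension hierarchy. A small illustrative pandas sketch (hierarchy and numbers invented for the example, not taken from the deck):

```python
import pandas as pd

# Toy data with a location hierarchy: Country > State > City.
sales = pd.DataFrame({
    "Country": ["US", "US", "US", "US", "India", "India"],
    "State":   ["NY", "NY", "CA", "CA", "MH",    "MH"],
    "City":    ["NYC", "Buffalo", "LA", "SF", "Mumbai", "Pune"],
    "Sales":   [100, 40, 80, 120, 60, 30],
})

# Drill-down view: low-level detail at the City grain.
by_city = sales.groupby(["Country", "State", "City"])["Sales"].sum()

# Roll-up: re-aggregate at progressively higher levels of the hierarchy.
by_state = sales.groupby(["Country", "State"])["Sales"].sum()
by_country = sales.groupby("Country")["Sales"].sum()

print(by_city, by_state, by_country, sep="\n\n")
```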
11. Nature of OLAP Analysis
- Aggregation -- (total sales, percent-to-total)
- Comparison -- Budget vs. Expenses
- Ranking -- Top 10, quartile analysis
- Access to detailed and aggregate data
- Complex criteria specification
- Visualization
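For instance, percent-to-total and ranking can be sketched in a few lines of pandas (the product totals below are invented, purely for illustration):

```python
import pandas as pd

totals = pd.Series({"Cola": 420, "Juice": 310, "Milk": 150, "Cream": 90},
                   name="Sales")   # invented product totals

# Aggregation: percent-to-total.
pct_of_total = totals / totals.sum() * 100

# Ranking: order the products and bucket them into quartiles.
rank = totals.rank(ascending=False)
quartile = pd.qcut(totals, 4, labels=["Q4", "Q3", "Q2", "Q1"])

print(pd.DataFrame({"Sales": totals, "PctOfTotal": pct_of_total.round(1),
                    "Rank": rank, "Quartile": quartile}))
```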
12. Organizationally Structured Data
- Different departments (marketing, manufacturing, sales, finance) look at the same detailed data in different ways.
- Without the detailed, organizationally structured data as a foundation, there is no reconcilability of data.
13. Multidimensional Spreadsheets
- Analysts need spreadsheets that support:
  - pivot tables (cross-tabs)
  - drill-down and roll-up
  - slice and dice
  - sort
  - selections
  - derived attributes
- Popular in the retail domain
14. OLAP - Data Cube
- Idea: analysts need to group data in many different ways, e.g. Sales(region, product, prodtype, prodstyle, date, saleamount)
- saleamount is a measure attribute; the rest are dimension attributes
- Group by every subset of the other attributes
- Materialize (precompute and store) group-bys to give online response
- Also: hierarchies on attributes: date -> weekday, date -> month -> quarter -> year
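A minimal sketch of this idea, assuming a toy Sales table (column names follow the slide, the rows are invented): every subset of the dimension attributes is grouped and the measure summed, which is exactly the set of group-bys a cube materializes.

```python
from itertools import combinations
import pandas as pd

sales = pd.DataFrame({
    "region":     ["N", "N", "S", "S", "W"],
    "product":    ["Cola", "Juice", "Cola", "Milk", "Juice"],
    "date":       ["3/1", "3/1", "3/2", "3/2", "3/3"],
    "saleamount": [10, 20, 15, 5, 30],   # measure attribute
})

dims = ["region", "product", "date"]     # dimension attributes

# Materialize every group-by in the cube: one aggregate per subset of dims.
cube = {(): sales["saleamount"].sum()}   # the grand total
for size in range(1, len(dims) + 1):
    for subset in combinations(dims, size):
        cube[subset] = sales.groupby(list(subset))["saleamount"].sum()

print(cube[("region",)])
print(cube[("region", "product")])
```

In a real warehouse these aggregates would be precomputed and stored as summary tables rather than held in memory.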
15. SQL Extensions
- Front-end tools require:
  - Extended family of aggregate functions: rank, median, mode
  - Reporting features: running totals, cumulative totals
  - Results of multiple group-bys: total sales by month and total sales by product
  - Data Cube
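The pandas sketch below mimics those extensions (rank, median, running totals, and the results of multiple group-bys) on an invented monthly sales table; it is an illustration of the functionality, not the SQL syntax itself:

```python
import pandas as pd

df = pd.DataFrame({
    "product": ["Cola", "Cola", "Cola", "Juice", "Juice", "Juice"],
    "month":   [1, 2, 3, 1, 2, 3],
    "sales":   [100, 120, 90, 80, 60, 110],   # illustrative numbers
})

# Extended aggregate functions: rank within each month, median per product.
df["rank_in_month"] = df.groupby("month")["sales"].rank(ascending=False)
median_by_product = df.groupby("product")["sales"].median()

# Reporting features: running / cumulative totals per product.
df["running_total"] = df.groupby("product")["sales"].cumsum()

# Results of multiple group-bys: total sales by month AND by product.
by_month = df.groupby("month")["sales"].sum()
by_product = df.groupby("product")["sales"].sum()

print(df, median_by_product, by_month, by_product, sep="\n\n")
```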
16. Relational OLAP: 3-Tier DSS
- Database layer (data warehouse): store atomic data in an industry-standard RDBMS.
- Application logic layer (ROLAP engine): generate SQL execution plans in the ROLAP engine to obtain OLAP functionality.
- Presentation layer (decision support client): obtain multi-dimensional reports from the DSS client.
17. MD-OLAP: 2-Tier DSS
- Database and application logic layers (MDDB engine): store atomic data in a proprietary data structure (MDDB), pre-calculate as many outcomes as possible, and obtain OLAP functionality via proprietary algorithms running against this data.
- Presentation layer (decision support client): obtain multi-dimensional reports from the DSS client.
18. Typical OLAP Problems: Data Explosion
- The "data explosion syndrome": the number of aggregations grows rapidly with the number of dimensions (4 levels in each dimension).
- [Chart: number of aggregations vs. number of dimensions, Microsoft TechEd98]
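A back-of-the-envelope illustration of why the explosion happens: if each dimension contributes its 4 hierarchy levels plus an implicit "all" level, the number of candidate aggregations multiplies per dimension. The counting rule here is an assumption made for illustration; the TechEd98 chart may count differently.

```python
# Assumed counting rule: (levels per dimension + 1) choices per dimension,
# multiplied across the dimensions of the cube.
LEVELS_PER_DIMENSION = 4

for d in range(1, 9):
    aggregations = (LEVELS_PER_DIMENSION + 1) ** d
    print(f"{d} dimensions -> {aggregations:>7} candidate aggregations")
```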
19. Metadata Repository
- Administrative metadata:
  - source databases and their contents
  - gateway descriptions
  - warehouse schema, view & derived data definitions
  - dimensions, hierarchies
  - pre-defined queries and reports
  - data mart locations and contents
  - data partitions
  - data extraction, cleansing, transformation rules, defaults
  - data refresh and purging rules
  - user profiles, user groups
  - security: user authorization, access control
20. Metadata Repository .. 2
- Business data:
  - business terms and definitions
  - ownership of data
  - charging policies
- Operational metadata:
  - data lineage: history of migrated data and sequence of transformations applied
  - currency of data: active, archived, purged
  - monitoring information: warehouse usage statistics, error reports, audit trails
22. For a Successful Warehouse
- From day one establish that warehousing is a joint user/builder project
- Establish that maintaining data quality will be an ONGOING joint user/builder responsibility
- Train the users one step at a time
- Consider doing a high-level corporate data model in no more than three weeks
23. For a Successful Warehouse
- Look closely at the data extracting, cleaning, and loading tools
- Implement a user-accessible automated directory to information stored in the warehouse
- Determine a plan to test the integrity of the data in the warehouse
- From the start, get warehouse users in the habit of 'testing' complex queries
24. For a Successful Warehouse
- Coordinate system roll-out with network administration personnel
- When in a bind, ask others who have done the same thing for advice
- Be on the lookout for small, but strategic, projects
- Market and sell your data warehousing systems
25. Data Warehouse Pitfalls
- You are going to spend much time extracting, cleaning, and loading data
- Despite best efforts at project management, data warehousing project scope will increase
- You are going to find problems with systems feeding the data warehouse
- You will find the need to store data not being captured by any existing system
- You will need to validate data not being validated by transaction processing systems
26. Data Warehouse Pitfalls
- Some transaction processing systems feeding the warehousing system will not contain detail
- Many warehouse end users will be trained and never or seldom apply their training
- After end users receive query and report tools, requests for IS-written reports may increase
- Your warehouse users will develop conflicting business rules
- Large-scale data warehousing can become an exercise in data homogenizing
27. Data Warehouse Pitfalls
- 'Overhead' can eat up great amounts of disk space
- The time it takes to load the warehouse will expand to the amount of time in the available window... and then some
- Assigning security cannot be done with a transaction processing system mindset
- You are building a HIGH maintenance system
- You will fail if you concentrate on resource optimization to the neglect of project, data, and customer management issues and an understanding of what adds value to the customer
28. DW and OLAP Research Issues
- Data cleaning:
  - focus on data inconsistencies, not schema differences
  - data mining techniques
- Physical design:
  - design of summary tables, partitions, indexes
  - tradeoffs in use of different indexes
- Query processing:
  - selecting appropriate summary tables
  - dynamic optimization with feedback
  - acid test for query optimization: cost estimation, use of transformations, search strategies
  - partitioning query processing between OLAP server and backend server
29. DW and OLAP Research Issues .. 2
- Warehouse management:
  - detecting runaway queries
  - resource management
  - incremental refresh techniques
  - computing summary tables during load
  - failure recovery during load and refresh
  - process management: scheduling queries, load and refresh
  - query processing, caching
  - use of workflow technology for process management
30. DATA WAREHOUSING TECHNOLOGIES: Informatica, Hyperion Essbase, Hyperion Interactive Reports, Cognos, Ab Initio, MSBI, DataStage 8, MicroStrategy, Cognos Planning and Budgeting, Cognos TM1, Business Objects, BODI