This document discusses improving the feature development process for the open-source PyQtGraph library. It first provides background on PyQtGraph and open-source development practices. It then analyzes PyQtGraph's current process through case studies and identifies areas for improvement, such as needing a collaboration tool to better coordinate contributions. The document proposes extending the "Pirate Metrics" framework to better measure community interactions during feature development. The conclusions note this study could help developers, maintainers, and users better understand open-source processes.
This document describes a project called Git-Influencer that aims to discover influential GitHub users by language. It does this by mapping users to languages they contribute to, building networks for each language, and running PageRank algorithms on these networks to score user importance. Several challenges are outlined such as dealing with data volume, accounting for inactive users, and handling users with no followers. Potential improvements discussed include using different data storage, classification metrics, and graph algorithms.
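The scoring step described above can be sketched as a simplified PageRank power iteration over a per-language follower network. This is an illustrative stand-in, not Git-Influencer's actual code; the toy edge list and user names are made up. Note how spreading the rank of "dangling" users evenly addresses the users-with-no-outgoing-edges challenge mentioned above.

```python
def pagerank(edges, damping=0.85, iters=50):
    """edges: list of (follower, followed) pairs; returns {user: score}."""
    nodes = {u for e in edges for u in e}
    out = {u: [] for u in nodes}
    for src, dst in edges:
        out[src].append(dst)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        # Rank mass from users who follow nobody ("dangling" nodes)
        # is redistributed evenly across the whole network.
        dangling = sum(rank[u] for u in nodes if not out[u])
        nxt = {u: (1 - damping) / n + damping * dangling / n for u in nodes}
        for u in nodes:
            for v in out[u]:
                nxt[v] += damping * rank[u] / len(out[u])
        rank = nxt
    return rank

# Toy network for one language: alice is followed by bob and carol.
scores = pagerank([("bob", "alice"), ("carol", "alice"), ("alice", "bob")])
top = max(scores, key=scores.get)
```

With two in-links, alice ends up with the highest influence score; the scores form a probability distribution, so they sum to 1.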
Experienced Software Engineer with a demonstrated history of working in the computer software industry. Skilled in Java, Python, JavaScript (ES6), Flask, React, and databases. Strong engineering professional with a B.Tech in Computer Engineering from Thapar University, Patiala.
FME-Based Tool for Automatic Updating of Geographical Git Repositories (Pushi... by Safe Software
Safe Software's Ken Bragg discusses a project that uses FME and Git to create an open data repository of GeoJSON files on GitHub that also serves as a collaborative mapping framework.
This document summarizes the progress of a team developing an RSS reader Android application. Coding and testing are now grouped under the same task, and the referenced research papers have been added. New Gantt and PERT charts are included with proposed task names. The team added internet permissions, optimized code, and enabled reading Atom feeds. Their agenda for the next month is to implement a content panel, replace the file system with a database, and implement localization.
SRDS2019: Abeona: an Architecture for Energy-Aware Task Migrations from the E... (LEGATO project)
This paper presents our preliminary results with ABEONA, an edge-to-cloud architecture that allows migrating tasks from low-energy, resource-constrained devices on the edge up to the cloud. Our preliminary results on artificial and real-world datasets show that workloads can be executed more energy-efficiently by scaling horizontally at the edge, without negatively affecting the execution runtime.
Big Data LDN 2018: ENABLING DATA-DRIVEN DECISIONS WITH AUTOMATED INSIGHTS (Matt Stubbs)
Date: 13th November 2018
Location: Customer Experience Theatre
Time: 11:50 - 12:20
Speaker: Charlotte Emms
Organisation: seenit
About: How do you get your colleagues interested in the power of data? Taking you through Seenit's journey of using Couchbase's NoSQL database to create a regular, fully automated update in an easily digestible format.
This document discusses forecasting the number of daily issues on GitHub repositories to help foundations efficiently manage resources. The authors collected daily issue and commit data from 5 large GitHub repositories over the past year. They used time series forecasting techniques to forecast the number of daily issues for each repository over the next 3 weeks. By evaluating forecast accuracy, they selected the best technique for each repository and generated ensemble forecasts. On average, their forecasts were 16% more accurate than the benchmark. This approach could help foundations allocate programmers in advance to repositories expected to have higher issue volumes.
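The select-the-best-technique-then-ensemble approach described above can be sketched in a few lines. This is not the authors' code: the candidate techniques (a naive last-value benchmark and a 7-day moving average), the synthetic daily-issue series, and the 3-day holdout are all illustrative stand-ins for their year of data and 3-week horizon.

```python
def naive_forecast(history, horizon):
    # Benchmark: repeat the last observed value.
    return [history[-1]] * horizon

def moving_average_forecast(history, horizon, window=7):
    # Forecast the mean of the most recent `window` days.
    avg = sum(history[-window:]) / window
    return [avg] * horizon

def mape(actual, forecast):
    # Mean absolute percentage error, used to rank techniques.
    return sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual) * 100

history = [10, 12, 11, 13, 12, 14, 13, 15, 14, 16, 15, 17, 16, 18]
train, test = history[:-3], history[-3:]   # hold out the last 3 days

candidates = {
    "naive": naive_forecast(train, 3),
    "ma7": moving_average_forecast(train, 3),
}
errors = {name: mape(test, f) for name, f in candidates.items()}
best = min(errors, key=errors.get)         # best technique for this repo
ensemble = [sum(vals) / len(vals) for vals in zip(*candidates.values())]
```

On this trending toy series the naive benchmark happens to win; per the summary, the choice of technique varies by repository, which is exactly why per-repository selection and ensembling help.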
Data Sharing, Distribution and Updating Using Social Coding Community Github ... (Universität Salzburg)
This document provides a summary of a presentation about using GitHub and LaTeX for graduate research projects. It discusses the benefits of GitHub for collaborative work and version control. It also highlights some advantages of LaTeX over traditional text editors for writing theses. The presentation includes steps for creating a personal GitHub repository and maintaining a project. It provides an example of using GitHub and LaTeX for an MSc thesis on seagrass mapping. Overall, the presentation aims to demonstrate how these tools can facilitate writing, editing, and managing research projects in an academic setting.
The Big Data projects course includes five projects:
Data Engineering with PDF Summary Tool: Create a Streamlit app to summarize PDFs, comparing nougat and PyPDF libraries, and integrate architectural diagrams.
Large Language Models for SEC Document Summarization: Develop a tool for summarizing PDF documents, evaluating different libraries, and creating Jupyter notebooks and APIs for Streamlit integration.
Document Summarization with LLMs and RAG: Focus on automating embedding creation, data processing, and developing a client-facing application with secure login and search functionalities.
Data Engineering with Snowpark Python: Reproduce data pipeline steps, analyze datasets, design architectural diagrams, and integrate Streamlit with OpenAI for SQL query generation using natural language.
Project Redesign and Rearchitecture: Review existing architecture and redesign using open-source components and enterprise alternatives, focusing on flexible, scalable, and cost-effective solutions.
1. The document describes a demonstration of the Readium JS software using the IMS Caliper sensor API and LTI integration to capture learning analytics data from a digital textbook.
2. The demo aims to capture events like page views, annotations, bookmarks and logins/logouts and transmit the data in JSON format to be stored and analyzed on a pseudo platform with MongoDB databases.
3. Next steps discussed are evaluating the developed code, contributing it to the Readium project on GitHub, revising the LTI integration, and supporting EPUB for Education standards in Readium. Guidelines for learning data interoperability in ISO/IEC JTC1 SC36 WG8 are also summarized.
Presentation of the paper "Primers or Reminders? The Effects of Existing Review Comments on Code Review" published at ICSE 2020.
Authors:
Davide Spadini, Gül Calikli, Alberto Bacchelli
Link to the paper: https://research.tudelft.nl/en/publications/primers-or-reminders-the-effects-of-existing-review-comments-on-c
Maruti Gollapudi has over 17 years of experience as a principal architect, specializing in digital customer experience. Some of his significant contributions include developing a data aggregation and analytics platform hosted on AWS that enables capabilities like social analytics, text analytics using NLP and machine learning, and enterprise search. He has experience building solutions leveraging technologies such as Java, JBoss, Kafka, MongoDB, Solr, Watson, and various analytics and social APIs. Recent projects include developing a headless CMS for page building and dynamic content modification for CNBC, and architecting a middleware for CNBC's integration with Uber to dynamically serve ride-related content.
London atlassian meetup 31 jan 2016 jira metrics-extract slides (Rudiger Wolf)
Slides for a talk given to the London Atlassian User Group, Jan 2017: how to get started with Python to extract data from Jira and produce charts for your Agile team.
This document describes a project to analyze GitHub data and develop visualizations and recommendations. The project will have two parts: 1) A visualization part that analyzes metrics like programming languages used, active users, geographic distribution of users, and popular repositories, and 2) A recommendation system that suggests potential contributors or repositories for a given user based on their activity history. The project aims to provide insights into active areas on GitHub and which languages are most widely used. It will also help increase collaboration by recommending potential collaborators and interesting repositories for users. The document outlines the project timeline and division of labor across gathering requirements, design, implementation, and developing the user interface.
The document discusses the evolution of continuous integration and delivery workflows at Red Hat's Fabric8 project. It describes how the workflows have scaled from initially having 4 main Java repositories and 15 Jenkins jobs to now having over 80 repositories and 143 Jenkins jobs. The document outlines the key tools used in their workflows including Jenkins, Kubernetes, Kibana, Grafana and others and demonstrates how automation is applied across the entire software development lifecycle from app creation to continuous improvement. It encourages adopting similar practices to help deliver value to customers faster.
Git is a decentralized, open-source version control system, while GitHub is a web-based hosting service for Git repositories that offers additional collaboration features. GitHub allows users to fork repositories to propose and contribute changes. Key features include wikis, task management, bug tracking, and pull requests to merge changes. These forking, pulling, and merging workflows make GitHub a powerful collaboration tool for software developers and other users.
This document outlines the objectives and content of a course on software development practices and web development. The course covers agile software development methods like Scrum, setting up GitHub repositories, developing static and dynamic web pages using HTML, CSS, and JavaScript, and implementing mini projects like online assessment or ticket reservation systems using these technologies. The course has 5 units covering agile development, Git and GitHub, HTML, CSS, JavaScript basics, and JavaScript objects. Students will learn to apply agile methods, create GitHub repositories, develop web pages with HTML, design pages with CSS, add interactivity with JavaScript, and handle events.
Research data spring: streamlining deposit (Jisc RDM)
The research data spring project "Streamlining deposit: an OJS to repository plugin" slides for the third sandpit workshop. Project led by Ernesto Priego of City University London.
Efficient GitHub Crawling using the GraphQL API (Matthias Trapp)
This document discusses efficient crawling of GitHub data using the GraphQL API compared to traditional REST APIs. It presents the Prometheus system, which uses a microservices architecture and event-driven approach to split GraphQL queries and import response data into a database. An experiment shows the Prometheus system is over 3 times faster than an existing crawler when retrieving issues data from GitHub repositories. The document concludes the GraphQL API enables better performance for crawling but query structure also impacts efficiency.
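To make the REST-vs-GraphQL contrast concrete, here is a hedged sketch of the query shape GitHub's GraphQL (v4) API accepts for paginated issue retrieval: a single request can return up to 100 issues with exactly the fields needed, where the REST v3 API would need one request per page and often extra requests for related data. This only builds the query string (the `issues_query` helper is ours, not from the Prometheus system); actually sending it requires an authenticated POST to the GraphQL endpoint.

```python
def issues_query(owner, name, page_size=100, cursor=None):
    """Build a GitHub GraphQL query for one page of open issues."""
    after = f', after: "{cursor}"' if cursor else ""
    return f"""
    query {{
      repository(owner: "{owner}", name: "{name}") {{
        issues(first: {page_size}{after}, states: OPEN) {{
          pageInfo {{ endCursor hasNextPage }}
          nodes {{ number title createdAt }}
        }}
      }}
    }}"""

# First page has no cursor; subsequent pages pass pageInfo.endCursor back in.
first_page = issues_query("SOM-Research", "Gitana")
next_page = issues_query("SOM-Research", "Gitana", cursor="abc123")
```

As the summary notes, query structure matters: requesting fewer fields or smaller pages changes both response size and the query's rate-limit cost.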
Software development projects are notoriously complex and difficult to deal with. Several support tools, such as issue tracking, code review, and Source Control Management (SCM) systems, have been introduced in the past decades to ease development activities. While such tools efficiently track the evolution of a given aspect of the project (e.g., bug reports), they provide just a partial view of the project and often lack advanced querying mechanisms, limiting themselves to command-line or simple GUI support. This is particularly true for projects that rely on Git, the most popular SCM system today.
In this paper, we propose a conceptual schema for Git and an approach that, given a Git repository, exports its data to a relational database in order to (1) promote data integration with other existing SCM tools and (2) enable writing queries on Git data using standard SQL syntax. To ensure efficiency, our approach comes with an incremental propagation mechanism that refreshes the database content with the latest modifications. We have implemented our approach in Gitana, an open-source tool available on GitHub (https://github.com/SOM-Research/Gitana).
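The kind of query such a relational export enables can be sketched with an in-memory SQLite database. The two-table schema below (developer, commit) is a simplified stand-in for illustration, not Gitana's actual conceptual schema; the point is that standard SQL replaces bespoke `git log` plumbing.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# "commit" is an SQL keyword, so the table name is quoted.
db.executescript("""
CREATE TABLE developer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE "commit" (
    id INTEGER PRIMARY KEY,
    author_id INTEGER REFERENCES developer(id),
    authored_date TEXT,
    message TEXT);
INSERT INTO developer VALUES (1, 'alice'), (2, 'bob');
INSERT INTO "commit" VALUES
    (1, 1, '2016-01-10', 'fix parser'),
    (2, 1, '2016-02-02', 'add tests'),
    (3, 2, '2016-02-03', 'update docs');
""")

# Plain SQL over Git data: commits per developer, busiest first.
rows = db.execute("""
    SELECT d.name, COUNT(*) AS n
    FROM "commit" c JOIN developer d ON d.id = c.author_id
    GROUP BY d.name ORDER BY n DESC
""").fetchall()
```

Once the data is relational, the same query can join against tables imported from other SCM tools, which is the data-integration benefit the paper highlights.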
Building Reactive Real-time Data Pipeline (Trieu Nguyen)
Topic: Building a reactive real-time data pipeline at FPT
1) What is a Data Pipeline?
2) Big Data Problems at FPT
+ VnExpress: pageview and heat-map
+ eClick: real-time reactive advertising
3) Solutions and Patterns
4) Fast Data Architecture at FPT
5) Wrap up
Crunching the numbers: Open Source Community Metrics at OSCON (Dawn Foster)
Co-presented with Dave Neary at OSCON 2011.
Every community manager knows that community metrics are important, but how do you come up with a plan and figure out what you want to measure? Most community managers have their own set of hacky scripts for extracting data from various sources after they decide what metrics to track. There is no standardised Community Software Dashboard you can use to generate near-real-time stats on your community growth.
Like most open source projects, we have diverse community infrastructure for MeeGo, including Mailman, Drupal, Mediawiki, IRC, git, OpenSuse Build Service, Transifex and vBulletin. We wanted to unify these sources together, extract meaningful statistics from the data we had available to us, and present it to the user in a way that made it easy to see if the community was developing nicely or not.
Building on the work of Pentaho, Talend, MLStats, gitdm and a host of others, we built a generic and open source community dashboard for the MeeGo project, and integrated it into the website. The project was run in the open at http://wiki.meego.com/Metrics/Dashboard and all products of the project are available for reuse.
This presentation will cover the various metrics we wanted to measure, how we extracted the data from a diverse set of services to do it, and more importantly, how you can do it too.
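The core of such a dashboard is unglamorous: normalise events from each tool (git, Mailman, IRC, ...) into common records, then aggregate. A minimal sketch of the aggregation step, with made-up events rather than real MeeGo data, might count unique active contributors per month:

```python
from collections import defaultdict

# (date, source, actor) records after per-tool extraction/normalisation.
events = [
    ("2011-03-02", "git", "alice"),
    ("2011-03-09", "mailman", "bob"),
    ("2011-03-21", "irc", "alice"),
    ("2011-04-01", "git", "carol"),
    ("2011-04-05", "git", "alice"),
]

active = defaultdict(set)
for date, source, actor in events:
    month = date[:7]            # "YYYY-MM"
    active[month].add(actor)    # a set, so each person counts once per month

growth = {month: len(actors) for month, actors in sorted(active.items())}
```

Plotting `growth` over time gives the "is the community developing nicely?" view the talk describes; the hard part in practice is the per-tool extraction and identity merging, not this aggregation.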
Crunching the numbers: Open Source Community Metrics (Dawn Foster)
Every community manager knows that community metrics are important, but how do you come up with a plan and figure out what you want to measure? Most community managers have their own set of hacky scripts for extracting data from various sources after they decide what metrics to track. There is no standardised Community Software Dashboard you can use to generate near-real-time stats on your community growth.
Like most open source projects, we have diverse community infrastructure for MeeGo, including Mailman, Drupal, Mediawiki, IRC, git, OpenSuse Build Service, Transifex and vBulletin. We wanted to unify these sources together, extract meaningful statistics from the data we had available to us, and present it to the user in a way that made it easy to see if the community was developing nicely or not.
Building on the work of Pentaho, Talend, MLStats, gitdm and a host of others, we built a generic and open source community dashboard for the MeeGo project, and integrated it into the website. The project was run in the open at http://wiki.meego.com/Metrics/Dashboard and all products of the project are available for reuse.
This presentation will cover the various metrics we wanted to measure, how we extracted the data from a diverse set of services to do it, and more importantly, how you can do it too.
Flex and rigid-flex printed circuit boards (PCBs) are, even at a basic level, some of the most complex PCBs in the industry. With that in mind, it's incredibly easy to make a mistake, to leave something out, or to create a design that was doomed from the start.
Such design flaws can lead to eventual failure through delamination, short circuits, damage to the flex portions, and many other problems. The easiest way to circumvent them is to start at the beginning: design with failure prevention in mind rather than trying to fix existing designs to accommodate problems.
In this webinar, we cover how to design flex and rigid-flex PCBs with failure prevention in mind to save time, money, and headaches, and what failure can look like.
For more information on our flex and rigid-flex PCB solutions, visit https://www.epectec.com/flex.
About
The Practice Head is assembled with the Practice Torpedo for carrying out exercise firings. It carries the Homing Head in the forward section and an oxygen flask in the rear section, and imparts positive buoyancy to the Torpedo at the end of its run. The Practice Head is divided into two compartments: the Ballast Compartment (housing the Light Device, Depth & Roll Recorder, Signal Flare Ejector, Discharge Valve, Stop Cock, Water Discharge Valve, Bellow Reducing Valve, Release Mechanism, Recess, Bypass Valve, Pressure Equalizer, Float, Sinking Plug, etc.), which provides positive buoyancy at the end of the run by discharging the 140 litres of water filled in the compartment, and the dry Instrument Compartment (housing the safety & recovery unit and its battery, combined homing and influence exploder equipment, noise maker, bollards, safety valve, etc.). The recess in the Ballast Compartment houses the float, which inflates at the end of the run to keep the surfaced Torpedo afloat. Several hand holes/recesses are provided on the casing/shell of the Practice Head for assembly of the following components:
a) Signal Flare Ejector Assembly
b) Depth and Roll Recorder Assembly
c) Light Device
d) Pressure equalizer
e) Drain/Discharge Valve assembly
f) Bollard Assembly
g) Holding for Floater/Balloon Assembly
h) Sinking Valve
i) Safety Valve
j) Inspection hand hole
Technical Details:
Sr. No. 1: Casing Body
Material: Aluminium Alloy (AlMg5)
Larger Outer Diameter of the Casing: 532.4 mm
Smaller Outer Diameter of the Casing: 503.05 mm
Total Length: 1204.20 mm
Thickness: 6-8 mm
Structural Details of Casing: The casing is of uniform outer diameter for a certain distance from the rear side and tapered from a definite distance to the front side. (Refer T-DAP-A1828-GADWG-PH-REV 00)
Slope of the Tapered Portion: 1/8
Mass of Casing (without mounted components, but including the ribs and collars on the body): 58.5 kg
Maximum External Test Pressure: 12 kgf/cm2
Maximum Internal Test Pressure:
i. For Ballast Compartment: 2 kgf/cm2
ii. For Instrument Compartment: 1 kgf/cm2
The inner space of the casing assembly has two compartments:
i. Ballast Compartment
ii. Instrument Compartment
Cut-outs/recesses shall be provided for the assembly of the following components:
a) Signal Flare Ejector Assembly
b) Depth and Roll Recorder Assembly
c) Light Device
d) Pressure Equalizer
e) Drain/Discharge Valve Assembly

Sr. No. 2: Front Side Collar
Material: AlMg5
Maximum Outer Diameter: 500 mm
Pitch Circle Diameter: 468 mm
All dimensions as per drawing T-DAP-A1828-MDWG-C&R-REV-00
Application:
In a torpedo, the ballast components and instrument compartment play crucial roles in maintaining stability, control, and overall operational effectiveness. The ballast system primarily manages buoyancy and trim, ensuring that the torpedo maintains a stable trajectory underwater.
2. Let's spare a moment to think about what is happening with a giant open-source software project.
At a well-known open-source project
4. Linux Kernel Contributors
Figure 1: Top companies contributing to the Linux kernel, versions 4.8-4.13, in 2017
Source: Linux Kernel Report 2017, Linux Foundation
5. Table of Contents
1. What is PyQtGraph and where does it come from?
2. Open Source Feature Development: Known Facts
3. Analysis of PyQtGraph's Feature Development Process
4. Guidelines for PyQtGraph's Feature Process Improvements
5. Conclusions
6. PyQtGraph: A Graphics Library
Functionalities:
Basic 2D plotting
Image display with interactive lookup tables
3D graphics system
Library of widgets and modules useful for science/engineering applications
Figure 2: Histogram drawn with PyQtGraph
Source: www.pyqtgraph.org
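The "basic 2D plotting" functionality above can be sketched with a few lines of code. This is a minimal illustration, assuming PyQtGraph and NumPy are installed; it builds histogram data in the style of Figure 2, with the GUI call kept in a separate function so the data step stands on its own.

```python
# Minimal sketch of PyQtGraph's basic 2D plotting (histogram, as in Figure 2).
# Assumes `pip install pyqtgraph numpy` and a Qt binding; not the deck's own code.
import numpy as np

def histogram_data(samples, bins=40):
    """Bin raw samples into (edges, counts) suitable for a step plot."""
    counts, edges = np.histogram(samples, bins=bins)
    return edges, counts

def show_histogram(samples):
    # GUI part -- only runs where a Qt display is available.
    import pyqtgraph as pg
    edges, counts = histogram_data(samples)
    # stepMode="center" tells PyQtGraph that len(edges) == len(counts) + 1
    pg.plot(edges, counts, stepMode="center", fillLevel=0, brush=(0, 0, 255, 150))
    pg.exec()

if __name__ == "__main__":
    show_histogram(np.random.normal(size=1000))
```

Splitting data preparation from display keeps the binning step testable without a running Qt event loop.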
8. Feature Development in Open-Source Software
Iterative process with a public repository
Mailing list, forum boards
Small, frequent changes to the code repository
Few key developers (that is, limited resources)
At least one maintainer
10. Applying Pirate Metrics to the PyQtGraph Project
Figure 4: The AARRR! Metrics for PyQtGraph
Source: "Pirate Metrics: A new way to measure open source community success" by Gaby Fachler
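The AARRR funnel referenced above (Acquisition, Activation, Retention, Referral, Revenue) can be sketched as a simple stage-to-stage conversion computation. The stage names follow the framework; the counts below are invented placeholders, not real PyQtGraph figures.

```python
# Hypothetical AARRR ("Pirate Metrics") funnel for an open-source project.
# Counts are illustrative placeholders only.
FUNNEL = [
    ("Acquisition", 1000),  # e.g. visitors to the project website
    ("Activation", 400),    # e.g. users who install and run an example
    ("Retention", 150),     # e.g. users still active a month later
    ("Referral", 60),       # e.g. users who recommend the library
    ("Revenue", 25),        # e.g. users who contribute code or docs back
]

def conversion_rates(funnel):
    """Percentage of users surviving each stage relative to the previous one."""
    rates = {}
    for (_, prev), (stage, cur) in zip(funnel, funnel[1:]):
        rates[stage] = round(100 * cur / prev, 1)
    return rates
```

For an open-source project, the "Revenue" stage is usually reinterpreted as contributions back to the project rather than money, which is how the figure applies the framework to PyQtGraph.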
11. To Accept or Not to Accept?
A dilemma often presenting itself to the maintainer:
One side: accepting (new) code appeases the feature contributor, and possibly other users as well.
Other side: the new code becomes the responsibility of the maintainer.
12. PyQtGraph's Code Development
Bug reports and new feature proposals on GitHub Issues, GitHub Pull Requests and the PyQtGraph Google Groups pages
Maintainer of the GitHub repository (and also project founder): Luke Campagnola
8-10 user queries/feature proposals every month
60 percent of user queries/feature proposals are answered
About 40 listed contributors
All development is volunteer-based
An FAQ for prospective contributors is available
13. PyQtGraph Google Group Statistics
Figure 5: Data related to the number of posts on the PyQtGraph Google Group forum site
14. Analysing the Library Forum Posts
Only posts where the maintainer had commented were analysed
Corresponding code changes on GitHub were studied
A list of observations was created
3 cases of feature development were studied
The 3 cases represented different feature development outcomes
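The filtering step described above (keep only threads in which the maintainer commented) can be sketched as follows. The field names and sample threads are assumptions for illustration, not the study's actual dataset.

```python
# Hypothetical sketch of the forum-post filtering step: retain only threads
# where the maintainer participated. Field names are assumptions.
def threads_with_maintainer(threads, maintainer="Luke Campagnola"):
    return [t for t in threads if maintainer in t["commenters"]]

sample_threads = [
    {"title": "New Time Axis", "commenters": ["userA", "Luke Campagnola"]},
    {"title": "Crash on import", "commenters": ["userB"]},
]
```

Filtering on maintainer participation narrows the analysis to the interactions that actually drive accept/reject decisions, which is what the three case studies examine.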
15. A Successful Development Cycle
Figure 6: Timeline of events for a typical successful feature-addition process
16. Case of Unsuccessful Feature Development
Figure 7: Timeline of interactions for the proposed "New Time Axis" feature
17. Suggested Improvements for the Feature Development Process
Need for a collaboration tool
(Objective: focus the current development resources on feature completion)
A new metric to assign a collaboration level to new feature code posts
Visibility across both GitHub and the Google Groups forum
While feature development is in progress: automatic tracking of the correction list
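The proposed collaboration-level metric above is not specified in detail in the deck; one plausible sketch is a weighted sum of the interactions a feature post gathers across GitHub and the Google Group. The weights and field names below are illustrative assumptions.

```python
# Hypothetical sketch of the proposed collaboration-level metric.
# Weights are illustrative assumptions, not values from the talk.
WEIGHTS = {"comments": 1, "code_reviews": 3, "cross_forum_links": 2}

def collaboration_level(post):
    """Score a feature post by its weighted interaction counts."""
    return sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)
```

A higher score would flag posts already attracting community effort, helping the maintainer direct the project's limited development resources toward features most likely to be completed.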
21. Conclusions: Beneficiaries & Limitations of Scope
This study could aid:
a developer wishing to contribute to the PyQtGraph project code
the maintainer of the PyQtGraph project
a user studying the open-source process
Limitations:
Research is based on only one open-source library
Each open-source project may have its own dynamics
22. References:
1. Luke Campagnola. PyQtGraph Project home page: http://www.pyqtgraph.org/ [Internet] [cited 24 April 2018]
2. Luke Campagnola. PyQtGraph Project official documentation page: http://www.pyqtgraph.org/documentation/installation.html [Internet] [cited 24 April 2018]
3. Gaby Fachler. Pirate Metrics: A new way to measure open source community success. https://opensource.com/business/16/6/pirate-metrics [Internet] [cited 24 April 2018]
24. Plotting a Graph
Imagine an apple tree that grows uniformly at the rate of 1 meter per year. It was planted in 2010. Can you show how it has grown?
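A worked answer to the exercise: height is simply years elapsed since planting times the growth rate. The plotting call is shown in comments, assuming PyQtGraph is installed; the function names are my own.

```python
# Worked answer to the exercise: a tree planted in 2010 growing 1 m/year.
def tree_height(year, planted=2010, rate=1.0):
    """Height in meters at the given year (zero before planting)."""
    return max(0.0, (year - planted) * rate)

years = list(range(2010, 2019))
heights = [tree_height(y) for y in years]

# Plotting with PyQtGraph (assumed installed) would then be:
#   import pyqtgraph as pg
#   pg.plot(years, heights, symbol="o")
#   pg.exec()
```

The resulting graph is a straight line starting at zero in 2010, which is exactly the kind of basic 2D plot the library is built for.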