Machine Learning Systems Engineering (MLSE) is a collective effort started 5 years ago in Japan to address challenges in developing and deploying machine learning systems. Key activities included panel discussions at conferences to raise awareness among software engineers, workshops identifying gaps between ML and software engineering practices, and forming a special interest group to organize further discussions. Working groups studied challenges such as fairness, infrastructure, and development processes. International collaborations helped spread ideas to other countries. Research projects explored techniques for requirements engineering, testing, debugging and assuring quality in machine learning systems to develop the new field of machine learning systems engineering. Guidelines and books were also created to establish best practices.
This document outlines the evolution of software development influenced by manufacturing practices, highlighting historical phases from the late 20th century to current advancements like Agile and DevOps. It emphasizes the importance of continuous integration, feedback, and adapting to changes in customer requirements while also exploring the role of software in enhancing manufacturing efficiency. The future of manufacturing is framed as increasingly intertwined with software development, allowing for innovative design and dynamic adjustments post-production.
The document discusses the varying interpretations of 'artificial intelligence' (AI) and its implications for the fields of computer science, science, and engineering. It outlines three interpretations: AI as a field of study, AI as a sophisticated technology often misrepresented, and AI as an emerging frontier within computer science that drives new computation models like deep learning. The speaker emphasizes the confusion surrounding AI terminology and its evolving role in technological advancement, highlighting the limitations and potential future of AI applications.
The document discusses the evolution of artificial intelligence (AI) and deep learning, highlighting transitions through three waves of AI development. It explores concepts like black-box optimization, the limitations of machine learning, and the impact of AI on society and engineering. The author warns against misconceptions about AI safety, explainability, and optimization, advocating for a nuanced understanding of its complexities and implications.
This document discusses deep learning and inductive programming. It begins by defining deep learning as a stateless function that can take high-dimensional or categorical variables as input and produce low-dimensional outputs for classification or high-dimensional outputs for generation. The document then gives the example of converting Celsius to Fahrenheit with a simple, known formula, and contrasts this deductive approach with an inductive, data-driven approach that requires no prior knowledge of the model or algorithm. It suggests that neural networks can approximate any high-dimensional function, acting as a universal computing mechanism, and speculates that by 2020 over half of newly developed software will have inductively trained components, representing a large paradigm shift. Finally, it discusses how new engineering disciplines are needed as this new, inductive development paradigm takes hold.
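To make the contrast concrete, here is a minimal sketch (not from the original document) of the two approaches: the deductive version encodes the known formula F = C × 9/5 + 32, while the inductive version recovers the same mapping purely from example pairs, with a single linear unit trained by gradient descent standing in for a neural network.

```python
import numpy as np

# Deductive approach: the conversion rule F = C * 9/5 + 32 is known a priori.
def c_to_f(celsius):
    return celsius * 9.0 / 5.0 + 32.0

# Inductive approach: learn the same mapping from example pairs only,
# with no prior knowledge of the formula. A single linear unit trained
# by gradient descent stands in for a neural network here.
rng = np.random.default_rng(0)
c = rng.uniform(-40.0, 100.0, size=200)        # training inputs (Celsius)
f = c_to_f(c)                                  # training targets (Fahrenheit)

# Standardize the input so plain gradient descent converges quickly.
mu, sigma = c.mean(), c.std()
x = (c - mu) / sigma

w, b = 0.0, 0.0
lr = 0.1
for _ in range(1000):
    err = (w * x + b) - f                      # prediction error
    w -= lr * 2.0 * np.mean(err * x)           # gradient of MSE w.r.t. w
    b -= lr * 2.0 * np.mean(err)               # gradient of MSE w.r.t. b

# Undo the standardization to read off the learned formula.
slope, intercept = w / sigma, b - w * mu / sigma
print(slope, intercept)          # ~1.8 and ~32, recovered from data alone
print(slope * 100 + intercept)   # ~212, the boiling point of water
```

In a real setting the trained component would be a deep network fed with measured data rather than values generated from the formula; the point is only that the mapping is learned from examples rather than specified by a programmer.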
1) Deep neural networks can output any point in space but this is problematic when outputs must remain within a defined feasible region.
2) The presentation proposes transforming the output space to guarantee that outputs fall within the feasible region. This is done by bounding the space to a hypercube around a pivot point, then shrinking or extending points along the ray from the pivot, which remains an interior fixed point, so that they land inside the region (a sketch of such a transformation follows this list).
3) With this transformation, the output is guaranteed to remain feasible for any model parameters or inputs, allowing training to continue while enforcing constraints.
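A minimal sketch of this kind of transformation, assuming a polyhedral feasible region {x : Ax <= b} with the pivot strictly in its interior; the names (feasible_output, pivot, radius) and the tanh squashing are illustrative choices, not details taken from the presentation.

```python
import numpy as np

def feasible_output(z, pivot, A, b, radius=1.0, margin=1e-6):
    """Map a raw network output z (unbounded) to a point strictly inside
    the feasible region {x : A x <= b}, which must contain `pivot` in its
    interior (A @ pivot < b)."""
    # Step 1: bound the raw output to a hypercube of half-width `radius`
    # centered on the pivot.
    u = pivot + radius * np.tanh(z)
    d = u - pivot                        # ray from the pivot to the bounded point
    if np.allclose(d, 0.0):
        return pivot.astype(float)

    # Largest step t along d such that pivot + t*d stays in the hypercube.
    t_cube = radius / np.max(np.abs(d))

    # Largest step t along d such that pivot + t*d satisfies A x <= b.
    Ad = A @ d
    slack = b - A @ pivot                # strictly positive for an interior pivot
    ahead = Ad > 0
    # If no constraint lies ahead, the ray never leaves the region; keep u.
    t_feas = np.min(slack[ahead] / Ad[ahead]) if np.any(ahead) else t_cube

    # Step 2: shrink or extend along the ray so the hypercube boundary maps
    # onto the feasible boundary; the pivot itself is a fixed point.
    return pivot + (1.0 - margin) * (t_feas / t_cube) * d


# Example: the region x >= 0, y >= 0, x + y <= 1 written as A x <= b,
# with an interior pivot at its centroid.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0]])
b = np.array([0.0, 0.0, 1.0])
pivot = np.array([1.0 / 3.0, 1.0 / 3.0])

for z in (np.zeros(2), np.array([5.0, -0.3]), np.array([-100.0, 100.0])):
    x = feasible_output(z, pivot, A, b)
    assert np.all(A @ x <= b + 1e-9), "output left the feasible region"
    print(z, "->", x)
```

Because the rescaling is a smooth function of the network output, the same construction can be written with differentiable-framework operations in the model's forward pass, so gradients flow through it and training can proceed while the constraint is enforced by construction, as claimed in point 3.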