This paper develops a network model to measure academic influence objectively. It builds a network of authors who co-authored with Paul Erdos (Erdos1) using the small world model and introduces the concept of the "core" to classify authors by their influence. Authors with a larger average degree and core, and with connections outside their core field, are labeled "transferors" and have broader influence; authors whose influence lies mainly within their core are called "insiders". The model is also applied to measure paper influence through citations. Finally, a movie-actor network is constructed to validate the model and analyze influential actors. The paper closes with the model's strengths, weaknesses, and room for improvement in measuring influence accurately.
MCM/ICM Contest Paper
Team Control Number: 30254
Problem Chosen: C
Measuring the Influence in the Network
Summary
This paper aims to build an Internet-based network model to measure influence. Its validity and soundness are verified through several examples.
Firstly, a network of Erdos1 authors is built with the small world model. The model aims at digging out the relationships among the Erdos1 co-authors. To quantify influence objectively, the concept of the core is introduced and the co-authors of Erdos1 are divided into different classes; generally speaking, authors within the same class have similar influence. We set the core of the largest order to be the main core (elite core): the authors in this main core are the ones with relatively larger influence. To study the different characteristics of the authors' impact, the concepts of average degree, average core and coll are introduced; these determine the range of an author's influence. Analyzing the data processed by the model, we find that some authors with relatively large influence, whom we call "transferors", are not only active in their own core field but also very interactive outside it. This enables the advanced theory of the main core to be delivered to other research fields, and meanwhile advanced theories from other fields to be absorbed; this interactive process boosts the prosperity of the whole academic field. On the other hand, some authors with large influence, whom we call "insiders", collaborate only within their own core; moreover, a number of these insiders are leaders in their research field. Since they are all very capable, the way they cooperate makes it more likely that creative ideas are contributed to their field.
Secondly, this system of measurement can also be applied to measuring the quality and importance of papers. Another type of influence measure might be to compare the significance of a research paper by analyzing the important works that follow from its publication. We determine the number of times a paper is cited by introducing the concept of degree, and then calculate the core number on the basis of the out-degree; this serves as the standard of a paper's influence. Meanwhile, we analyze the impact towards the outside by studying the coll, average coll and average output degree.
Thirdly, we design a movie-actor example by setting up a network of personal relationships. We analyze who the most influential person is and how to make a wise decision on whom to work with. This verifies the validity and soundness of the model.
In the end, we draw conclusions about the strengths, weaknesses, sensitivity and power of the model. We still cannot overcome the model's original disadvantages: many factors can affect its validity. This is also where we think the model can be improved, and its correctness still needs to be discussed.
In this paper, some techniques for the analysis of large networks are presented in the tables and figures, produced with the program Pajek.
Key words: Erdos; Small World Model; Core; Complex Network; Influence; Measurement
1 Introduction
1.1 Background
Many methods have been created to measure the influence of academic research, but the standards for measuring influence are not objective enough.

One existing method to determine influence is to construct a co-author network and study the properties of citations using computer databases of scientific papers from different research fields. Many researchers have dedicated themselves to uncovering the mechanics of complex networks, and models like Generalized Random Graphs, Small-World Networks, and Scale-Free Networks have been created.

The Erdos Number mentioned in the case belongs to the family of Small-World Networks. Paul Erdos himself was one of the most famous academic co-authors of the 20th century, with over 500 co-authors and over 1400 published technical research papers.
1.2 Our Work
The development of our model is inspired by several ideas concerning both the small-world modeling process and the core modeling process.

Firstly, we searched for background information about Erdos and network measurement methods, and learned that the Erdos network is related to a complex network known as the small world. So we decided to build a small world model illustrating the intrinsic relationships among Erdos1 authors. At the beginning of the analysis, we calculate some basic data such as the degree distribution of the authors and the growth of the number of Erdos co-authors. We define the vertices as the Erdos1 authors and the edges as co-authorship between authors. Thus an author's degree equals the number of his or her co-authors, which shows the level of collaboration.

Secondly, we need to measure the properties of influence more deeply because, for example, some excellent researchers in the academic field do not write papers with others frequently, yet once their papers are published they greatly affect others' research. So we introduce the core model to evaluate some specific properties of influence according to the analysis of average degree, average core and coll. Meanwhile we define some persons as transferors and insiders to distinguish their roles in the network.

Thirdly, we apply our model to measuring the influence of papers. However, one property needs to be changed: since we measure the influence of a paper by analyzing the important works that follow from its publication, the out-degree is what matters in this task and all data are built on it.

Fourthly, we build a network model in the movie field and collect co-starring information for 20 movie actors. On one hand, we need to find the most influential actor in the network. On the other hand, we are required to give wise advice on how to choose an influential actor who can help one succeed. In this way we also examine the utility and soundness of our model.

In the end, we also discuss how our model works, its advantages and disadvantages, and the parts that need to be improved.
2 Assumptions
Weight measures the proportion that each co-author contributes to a paper. Suppose a researcher collaborates on the writing of a paper with n authors in total, i.e. s/he has n − 1 coauthors on that paper. We therefore assume that s/he is acquainted with each coauthor 1/(n − 1) times as much, on average, as if there were only one coauthor. We can imagine that the researcher divides his or her time equally among the n − 1 coauthors. This is obviously only a rough approximation: in reality a researcher spends more time with some coauthors than with others, so the proportion each of them contributes varies. The weight of each Erdos1 author's influence is assumed to be the same in this paper, since the information given is not concrete enough for us to distinguish them. Other factors like degree distribution, core number, and power ranking are taken into consideration in the process of measurement.
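This weighting assumption lends itself to a direct computation. Below is a minimal sketch in Python of how each co-authorship tie would accumulate weight 1/(n − 1) per shared paper; the paper lists are invented illustration data, not the Erdos1 records.

```python
# Hedged sketch of the weighting assumption: a paper with n authors adds
# 1/(n - 1) to the tie between each pair of its coauthors.
from collections import defaultdict
from itertools import combinations

papers = [
    ["Erdos", "Alon", "Bollobas"],  # 3 authors -> each pair gains 1/2
    ["Erdos", "Graham"],            # 2 authors -> the pair gains 1
]

weights = defaultdict(float)
for authors in papers:
    n = len(authors)
    if n < 2:
        continue                    # single-author papers add no ties
    for a, b in combinations(sorted(authors), 2):
        weights[(a, b)] += 1.0 / (n - 1)

for pair, w in sorted(weights.items()):
    print(pair, round(w, 3))
```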
The factors involved in measuring the impact of authors and their papers are confined to the co-author network and citation counts. Obviously, many other factors would contribute to the measurement; we choose to simplify the analysis and obtain an efficient outcome by omitting the less essential factors.
A problem with the obtained network is that, due to the non-standardized writing of authors' names, it contains several entries corresponding to the same author. For example, R.S. Dremdale, Lobert L. Dremdale, Lobert L. Scot Dredale, L.L. Dremdale, S. Dremdale, L. Dremdale, and L.L.S. Dremdale are relatively easy to guess. Moreover, "insider" information is needed to know whether Ottry Tchwahzkmpf and Ottry Cheng are the same person. Also, no provision is made in the database to distinguish two persons with the same name. We therefore assume the names mentioned in the case are unique: each person's name is different.
3 Basic Models of Network Topology
3.1 Random Network
In mathematical terms a network is represented by a graph. A graph is a pair of sets G = {P, E}, where P is a set of N nodes (or vertices or points) P1, P2, ..., PN and E is a set of edges (or links or lines) that connect two elements of P. Graphs are usually represented as a set of dots, each corresponding to a node, two of these dots being joined by a line if the corresponding nodes are connected.
3.2 Small World Networks
Real-world networks have a small-world character. The random rewiring procedure of the Watts-Strogatz model interpolates between a regular ring lattice and a random network without altering the number of nodes or edges. We start with N = 20 nodes, each connected to its four nearest neighbors. For p = 0 the original ring is unchanged; as p increases the network becomes increasingly disordered, until for p = 1 all edges are rewired randomly.
3.2.1 The Watts-Strogatz model
Watts and Strogatz (1998) proposed a one-parameter model that interpolates between an ordered finite-dimensional lattice and a random graph. The algorithm behind the model is the following:

[Figure: Characteristic path length l(p) and clustering coefficient C(p) for the Watts-Strogatz model. The data are normalized by the values l(0) and C(0) for a regular lattice. A logarithmic horizontal scale resolves the rapid drop in l(p), corresponding to the onset of the small-world phenomenon. During this drop C(p) remains almost constant, indicating that the transition to a small world is almost undetectable at the local level.]
(1) Start with order: start with a ring lattice with N nodes in which every node is connected to its first K neighbors (K/2 on either side). In order to have a sparse but connected network at all times, consider N ≫ K ≫ ln(N) ≫ 1.

(2) Randomize: randomly rewire each edge of the lattice with probability p such that self-connections and duplicate edges are excluded. This process introduces pNK/2 long-range edges which connect nodes that otherwise would be part of different neighborhoods. By varying p one can closely monitor the transition between order (p = 0) and randomness (p = 1).
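A short sketch of these two steps using the networkx library (our assumption; the paper itself worked in Pajek). connected_watts_strogatz_graph retries the rewiring until the result is connected, so l(p) is always defined:

```python
import networkx as nx

# N = 20 nodes, K = 4 nearest neighbors, as in the example above.
for p in (0.0, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(n=20, k=4, p=p, tries=100, seed=42)
    l = nx.average_shortest_path_length(G)  # characteristic path length l(p)
    C = nx.average_clustering(G)            # clustering coefficient C(p)
    print(f"p={p:.1f}  l(p)={l:.3f}  C(p)={C:.3f}")
```

Run over a logarithmic grid of p, this reproduces the rapid drop in l(p) while C(p) stays nearly constant, as described in the figure caption above.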
4 Small World Model
4.1 Introduction of the Small World
SIX DEGREES OF SEPARATION: Six degrees of separation is the theory that the relationship network of all the people in the world can be bridged in about six steps. An experiment was set up to test this theory: participants were required to send an email to a specific person totally unfamiliar to them through their own relationship networks, and it turned out, amazingly, that the average number of steps was approximately 6. The distance between a stranger and us is smaller than we have ever imagined, and this phenomenon is a strongly persuasive example of the small world model applied to the analysis of cooperation networks.
4.2 Erdos collaboration graph
Table 1: The growth of Erdos collaboration graph
Year No.of co-authors of Erdos
1934-1944 22
1945-1955 35
1956-1966 120
1967-1977 105
1978-1988 159
1989-1999 226
2000-2014 7
From the table above, we can see the number of Erdos's co-authors in each decade and the trend of that number over time.
4.2.1 Erdős number

The Erdős number nE of an author is defined as follows: Paul Erdős himself has nE = 0; people who have written a joint paper with Paul Erdős have nE = 1; and their co-authors, with Erdős number not yet defined, have nE = 2; etc.
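Since this definition is just distance from the Erdős vertex in the collaboration graph, Erdős numbers can be computed with a single breadth-first search. A minimal sketch (the edge list is invented illustration data):

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Erdos", "Alon"), ("Erdos", "Graham"),
    ("Alon", "Szemeredi"), ("Szemeredi", "Gowers"),
])

# BFS distances from the Erdos vertex are exactly the Erdos numbers.
erdos_number = nx.single_source_shortest_path_length(G, "Erdos")
print(erdos_number)
# {'Erdos': 0, 'Alon': 1, 'Graham': 1, 'Szemeredi': 2, 'Gowers': 3}
```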
4.2.2 Network of Erdos1
From the case, there are 510 researchers in the network of Erdos1. We first analyze some basic information by building the co-author network of the Erdos1 authors.
4.2.3 Basic Analyses
Related data are available at the URL: http://www.oakland.edu/grossmanrerdoshp.html.
These data can be represented as a graph called the Erdo?s collaboration graph ¡ª ¦Å = (V, E) .
The set of its vertices V consists of known authors with nE ¡Ü 2 , and its edges connect two authors,
if they wrote a joint paper, and at least one of them has nE ¡Ê {0, 1} ¡ª the data about collaboration
among authors with nE = 2 are not available.
Based on the 510-author Erdos1 network, some basic statistics about the number of co-authors are given in Table 2. In the Erdos1 co-author network the smallest degree is 1, compared to the largest of 97, attained by Abbott, Harvey Leslie. Besides, the average degree is 12.2549, which indicates a relatively high level of cooperation.
Table 2: Basic statistics on degrees in Erdos collaboration graph
nE = 1
Minimum 1
Median 7
Average degree 12.2549
Maximum 97
Maximizer Abbott, Harvey Leslie
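The statistics in Table 2 are straightforward to reproduce once a co-author graph is loaded. A sketch on a toy graph standing in for the Erdos1 network (which we do not reproduce here):

```python
import statistics
import networkx as nx

# Toy coauthorship graph standing in for the Erdos1 network.
G = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])

degrees = dict(G.degree())
values = sorted(degrees.values())

print("Minimum:", values[0])
print("Median:", statistics.median(values))
print("Average degree:", round(statistics.mean(values), 4))
top = max(degrees, key=degrees.get)
print("Maximum:", degrees[top], "attained by", top)
```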
5 Core
5.1 Definition

The notion of a core was introduced by Seidman in 1983 [3]. Let G = (V, L) be a graph, where V is the set of vertices and L is the set of lines (edges or arcs). We denote n = |V| and m = |L|. A subgraph H_k = (W, L|_W) induced by the set W is a k-core, or a core of order k, iff ∀v ∈ W: deg_{H_k}(v) ≥ k, and H_k is the maximum subgraph with this property. The core of maximum order is also called the main core. The core number of vertex v is the highest order of a core that contains this vertex.
[Figure 1: 0-, 1-, 2- and 3-cores of an example graph]

The degree deg(v) can be the in-degree, the out-degree, the in-degree + out-degree, etc., determining different types of cores.
In Figure 1 an example of the core decomposition of a given graph is presented. From this figure we can see the following properties of cores:

• The cores are nested: i < j ⟹ H_j ⊆ H_i.
• Cores are not necessarily connected subgraphs.
Since cores can be determined efficiently, and are among the few concepts that provide meaningful information about large networks, we expect that different approaches to the analysis of large networks can be built on this basis.
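The core decomposition itself is cheap to compute; reference [4] gives an O(m) algorithm. A sketch using networkx (an assumption; the paper used Pajek), with a standard built-in graph standing in for the Erdos1 network:

```python
import networkx as nx

G = nx.karate_club_graph()           # stand-in network for illustration

core_of = nx.core_number(G)          # core number of every vertex
k_max = max(core_of.values())        # order of the main core
main_core = nx.k_core(G, k=k_max)    # the main core as a subgraph

print("main core order:", k_max)
print("main core members:", sorted(main_core.nodes()))
```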
5.2 Case analysis
In our paper, the main task is to develop an influence measurement to determine who in the Erdos1 network has significant influence, taking into account the importance of the published works that connect important researchers within Erdos1. As mentioned above, the core number is related to the degree distribution of an author. A person with a high core number occupies a relatively central position, and this can be viewed as one aspect of the measurement of influence: a person in a central position can spread ideas by coauthoring with others directly, or by passing ideas on indirectly through familiar co-authorships. On the other hand, direct collaboration with others should be counted as another aspect of the measurement, and we introduce a notion "coll" to quantify an author's openness. Obviously, a person who cooperates with researchers in fields other than his or her own can spread the impact of his or her papers more widely across the whole academic network.
9. Team #30254 Page 7 of 15
Table 3: Top 15 authors according to number of co-authors
Author No. of co-authors Core
ALON, NOGA M. 97 18
RODL, VOJTECH 84 18
GRAHAM, RONALD LEWIS 84 18
BOLLOBAS, BELA 82 18
TUZA, ZSOLT 77 18
FUREDI, ZOLTAN 77 18
SPENCER, JOEL HAROLD 66 18
GYARFAS, ANDRAS 65 18
CHUNG, FAN RONG KING (GRAHAM) 62 18
PACH, JANOS 61 18
POMERANCE, CARL BERNARD 58 10
FAUDREE, RALPH JASPER, JR. 58 18
ODLYZKO, ANDREW MICHAEL 57 16
SZEMEREDI, ENDRE 57 18
KLEITMAN, DANIEL J. 57 18
We rank the 510 authors according to the number of their co-authors and list the top 15 in Table 3. From this table it can be concluded that Alon, Noga M. collaborated with the most authors; in other words, he is the most collaborative author in the network. The core of each person is also listed in our network analysis, and 18 turns out to be the largest core number. We name the 18-core the main core, which means the persons in this core have relatively stronger influence in the network, and the influence gets weaker as the core number decreases.
In the core-distribution table, the second column shows the number of coauthors that each core has. Columns three to nine are seven person samples showing the distribution of the cores they most cooperate with; these seven persons were selected based on the ranking of the number of coauthors. Graham has the highest average core number (7.44), so we can take him to be the person who cooperates over the widest range. At the opposite end, Furedi has the lowest average number, which means he does not cooperate so widely.
GRAHAM 14 42 13.24 35.88 1.36
GYARFAS 15 32 15.38 39.66 1.17
GYORI 7 13 17.08 53.62 1.05

The authors belonging to the main core and some of their characteristics are presented in this table, where deg is the average degree of all co-authors:
$$
\overline{\deg}(v) =
\begin{cases}
0 & \text{if } N(v) = \emptyset \\
\dfrac{1}{|N(v)|} \sum_{u \in N(v)} \deg(u) & \text{otherwise}
\end{cases}
$$
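A direct transcription of this definition in code (networkx's built-in nx.average_neighbor_degree computes the same quantity for all vertices at once):

```python
import networkx as nx

def avg_neighbor_degree(G, v):
    """Mean degree over v's neighbors; 0 if v is isolated."""
    nbrs = list(G[v])
    if not nbrs:
        return 0.0
    return sum(G.degree(u) for u in nbrs) / len(nbrs)

G = nx.path_graph(4)              # 0-1-2-3
print(avg_neighbor_degree(G, 1))  # neighbors 0, 2 have degrees 1, 2 -> 1.5
```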
Table 4: Top 23 most collaborative authors
Author coll
KLEITMAN, DANIEL J. 1.46
WEST, DOUGLAS BRENT 1.42
PACH, JANOS 1.39
GRAHAM, RONALD LEWIS 1.36
BABAI, LASZLO 1.35
NESETRIL, JAROSLAV 1.35
CHUNG, FAN RONG KING (GRAHAM) 1.32
SPENCER, JOEL HAROLD 1.31
TUZA, ZSOLT 1.29
BOLLOBAS, BELA 1.28
LUCZAK, TOMASZ 1.26
BURR, STEFAN ANDRUS 1.24
ALON, NOGA M. 1.24
FUREDI, ZOLTAN 1.23
RODL, VOJTECH 1.2
FRANKL, PETER 1.2
SCHELP, RICHARD H. 1.18
GYARFAS, ANDRAS 1.17
SIMONOVITS, MIKLOS 1.15
SZEMEREDI, ENDRE 1.14
GOULD, RONALD J. 1.13
TROTTER, WILLIAM THOMAS, JR 1.08
GYORI, ERVIN 1.05
12. Team #30254 Page 10 of 15
Table 5: Top 23 most collaborative authors in main core
Author Coll
1 KLEITMAN 1.45
2 WEST 1.42
3 PACH 1.39
4 GRAHAM 1.36
5 NESETRIL 1.35
… …
19 SIMONOVITS 1.15
20 SZEMEREDI 1.14
21 GOULD 1.13
22 TROTTER 1.08
23 GYORI 1.05
Multiple characteristics are presented in Table 5, from which we measure who is the most influential person in two respects. First, being a member of the main core indicates that s/he has great influence in the whole network, contributing to others directly or indirectly. Secondly, we define two types of important persons according to their "coll" number within the main core. A person with a higher coll number is more collaborative: s/he not only shares work achievements with the 'elites' (the persons in the main core) but also has many co-authors in relatively peripheral cores. We call such a person a 'transferor', one who transfers the elites' great ideas to the whole academic field. Meanwhile, a person with a lower coll number tends to collaborate more often with the people in the main core, comparatively separated from the peripheral cores. We call them 'insiders'. Though insiders do not often interact with peripheral persons, they have a great effect on the main core, so insiders are also judged significant. According to Table 5, the transferors are Kleitman, West, Pach and Graham, while the insiders are Gyori, Trotter, Gould, Simonovits and Schelp. They all have huge influence within the network, but in different respects.
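The classification reduces to a threshold on coll within the main core. An illustrative sketch, with coll values copied from Table 5; the cutoff of 1.2 is our own assumption, chosen only to reproduce the split described above:

```python
coll = {  # coll numbers of main-core authors, from Table 5
    "KLEITMAN": 1.45, "WEST": 1.42, "PACH": 1.39, "GRAHAM": 1.36,
    "NESETRIL": 1.35, "SIMONOVITS": 1.15, "SZEMEREDI": 1.14,
    "GOULD": 1.13, "TROTTER": 1.08, "GYORI": 1.05,
}

THRESHOLD = 1.2  # assumed cutoff between the two types
transferors = [a for a, c in coll.items() if c >= THRESHOLD]
insiders = [a for a, c in coll.items() if c < THRESHOLD]
print("transferors:", transferors)
print("insiders:", insiders)
```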
6 Citation
In task 3, we are required to compare the significance of a research paper by analyzing the important works that follow from its publication. In other words, we measure the influence of a cited paper ('quotee') by discussing the importance of the papers that cite it ('quoters'), quantified by out-degree and core number. The average out-degree and average core are properties of the 'neighbors' of a vertex, which in this network are the citing papers. Through the ranking of out-degree and core number, we can identify the most influential paper in this network.
First, we check the references of the 16 papers and find the citation relationships among them. Then we calculate the out-degree of each paper; moreover, we draw an out-degree graph to indicate the relationships among them so that the core fields can be identified.
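A sketch of this computation on a citation network, where an edge u → v records that paper u is cited by paper v, so out-degree counts citing works (the edges below are invented illustration data, not the actual 16-paper network):

```python
import networkx as nx

C = nx.DiGraph()
C.add_edges_from([(1, 5), (1, 6), (5, 6), (5, 10), (8, 5)])

out_deg = dict(C.out_degree())
# cores computed on the undirected version, where degree = in + out degree
cores = nx.core_number(C.to_undirected())

for paper in sorted(C.nodes()):
    print(paper, "out-degree:", out_deg[paper], "core:", cores[paper])
```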
Table 6: Average output degree, core, average core and collaborativeness of the 16 papers

Paper | Average output degree | Core | Average core | Coll
Statistical mechanics of complex networks. | 1 | 1 | 1 | 1
Scientific collaboration networks: II. Shortest paths, weighted networks, and centrality. | 1 | 1 | 1 | 1
Identity and search in social networks. | 1 | 1 | 1 | 1
Identifying sets of key players in a network. | 0 | 0 | 0 | nil
Networks, influence, and public opinion formation. | 0 | 0 | 0 | nil
On Random Graphs | 1 | 2 | 2 | 1
On properties of a well-known graph, or, What is your Ramsey number? | 0 | 0 | 0 | nil
Power and Centrality: A family of measures | 1 | 1 | 1 | 1
Social network thresholds in the diffusion of innovations | 0 | 0 | 0 | nil
Collective dynamics of 'small-world' networks. | 1.75 | 3 | 1.5 | 2
Models of core/periphery structures. | 0 | 1 | 0 | nil
The structure of scientific collaboration networks. | 1 | 1 | 1 | 1
Emergence of scaling in random networks. | 0.75 | 3 | 0.75 | 4
Navigation in a small world. | 1 | 1 | 1 | 1
The structure and function of complex networks. | 0 | 1 | 0 | nil
Statistical models for social networks. | 0 | 0 | 0 | nil
[Figure: out-degree citation graph of the 16 papers, with vertices numbered as in Table 7]
Table 7: The names of the papers corresponding to the numbers in the figure above

On Random Graphs, Publicationes Mathematicae 1
On properties of a well-known graph 2
Power and Centrality: A family of measures 3
Social network thresholds in the diffusion of innovations 4
Collective dynamics of 'small-world' networks 5
Emergence of scaling in random networks 6
Navigation in a small world 7
Models of core/periphery structures 8
Scientific collaboration networks: II. Shortest paths, weighted networks, and centrality 9
The structure of scientific collaboration networks 10
Statistical mechanics of complex networks 11
Identity and search in social networks 12
The structure and function of complex networks 13
Identifying sets of key players in a network 14
Networks, influence, and public opinion formation 15
Statistical models for social networks 16
From the table above, we first see that the main core is the 3-core. The most influential papers include 'Collective dynamics of small-world networks' and 'Emergence of scaling in random networks'. That is to say, these two papers have been cited the most and have strong impacts on other papers. Secondly, the coll of 'Emergence of scaling in random networks' is the biggest, which means this paper has been cited by papers from the widest range of different cores. Hence it can be assumed that the useful points in this paper are abundant and multi-angled, that its ideas are advanced and inspiring, or that its writing style is rigorous and easy to understand.
7 To be a more famous actor
7.1 Case background
Talent shows are an obvious symbol of the modern world: they build an arena for anyone to get into the sight of the public. We intend to apply our model to this problem, offering some advice to current actors on becoming more famous. According to the analysis above, an actor's agent needs to find someone who not only has significant influence but also connects the main pool and the new pool, meaning s/he is eager to work with newcomers. Since the persons under discussion are current actors, we put the emphasis more on the influence of established actors than on their openness. The concepts of "core" and "coll" introduced above can therefore be applied in this situation: "core" is used to locate the target actors the current actor should work with, since it measures their influence, while "coll" is used to determine the openness of the target actors; coll is not a key element in this situation. A group of comparatively suitable target actors can be selected using "core" and deg.

In the movie-acting field there is an actor, Kevin Bacon, of the same significance as Erdos, who has been celebrated as the 'center of the movie world'. Bacon has cooperated with hundreds of actors, many of whom are active in the field. Since our case is aimed at helping an actor become more famous and influential, a feasible way is to cooperate with the most influential actor. We randomly selected 10 male actors and 10 female actors who have cooperated with Bacon, in order to set up a network among them and calculate who the most influential actor is. A group of data to verify this model follows.
7.2 Case analysis
Table 8: Actors, number of their co-actors in the main core, total number of co-actors, average degree of all their co-actors, and their collaborativeness

Actor | Co-actors in main core | All co-actors | Core | deg | Coll
Renée Zellweger | 6 | 13 | 6 | 8 | 1
Shawnee Smith | 5 | 10 | 6 | 8.2 | 1
Liv Tyler | 5 | 10 | 6 | 8.6 | 1
Rainn Wilson | 4 | 8 | 6 | 8.75 | 1
Troy Garity | 4 | 8 | 6 | 8.25 | 1
Rose Byrne | 4 | 8 | 6 | 6.5 | 1
Robert Duvall | 3 | 6 | 6 | 7.33 | 1
John Hurt | 3 | 6 | 6 | 7.33 | 1
James McAvoy | 3 | 6 | 6 | 6.67 | 1
Laurence Belcher | 3 | 6 | 6 | 6.67 | 1
Gregg Henry | 3 | 6 | 6 | 8 | 1
Logan Lerman | 3 | 6 | 6 | 6.33 | 1
Katherine LaNasa | 3 | 6 | 6 | 7.33 | 1
Jennifer Lawrence | 3 | 6 | 6 | 6.67 | 1
Ellen Page | 3 | 6 | 6 | 8 | 1
Molly C. Quinn | 3 | 6 | 6 | 9 | 1
Degree and average degree are the two factors involved in this measurement. As the table above shows, Molly C. Quinn's average degree is the highest, which means her co-actors are the most collaborative; on the other hand, Renée Zellweger herself collaborates with the most actors. So both are influential persons in this network. As advice we would offer two choices: either cooperate with the person with the highest degree, who can introduce you to other actors directly, or cooperate with the person with the highest average degree, through whose co-actors you get more chances to collaborate.
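Both recommendations are simple maximizations over the columns of Table 8. A sketch over a few of its rows (tuples of actor, co-actors in main core, all co-actors, core, deg, coll):

```python
rows = [  # a few rows copied from Table 8
    ("Renee Zellweger", 6, 13, 6, 8.00, 1),
    ("Shawnee Smith",   5, 10, 6, 8.20, 1),
    ("Molly C. Quinn",  3,  6, 6, 9.00, 1),
]

by_degree = max(rows, key=lambda r: r[2])      # most co-actors: direct reach
by_avg_degree = max(rows, key=lambda r: r[4])  # most collaborative co-actors
print("for direct introductions, work with:", by_degree[0])
print("for reach through co-actors, work with:", by_avg_degree[0])
```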
8 Conclusion
8.1 Strength
• We introduce the small world model and use the software Pajek to extract and analyze data skillfully. Applying the small world model, we can sort out the relationships among an enormous number of people efficiently. Moreover, viewing the relationships among people as a network has significant value in analyzing many social problems, and this model can help us measure a person's contribution and influence with objective criteria.

• The notion of the core has also been applied in our process of measuring influence. The benefit of defining the core as a criterion is that we can divide people into different cores in which they have different influence. In other words, we can measure their influence roughly and then analyze their specific influence step by step according to their own data properties. The core model enables us to draw a more detailed, multi-angled and objective conclusion, since 'influence' shows itself in many respects.
8.2 Weakness
• When we measure degree using the program Pajek, if two coauthors write a paper together, the program counts one degree for each author (2 in total) when it should be counted only once. As a result, a person's real number of coauthors is less than the number that Pajek reports in the form of the degree of an author.

• Though we have built some models to measure influence in a network, we still do not have sufficient data to support and prove our models. The validity of the models we have constructed still needs to be verified.
8.3 Sensitivity Analysis
• When applying the model to a real network, it is very difficult to guarantee that the data are clean: data can be affected by many factors, such as personal variance, region, religion, and the collection process. Also, the model itself is often too idealistic. In reality the number of factors that can influence the impact of a paper is large, while the model we build takes only a limited number of factors into consideration. When the model is applied to the real world, it is quite likely that it will not suit the actual situation.
8.4 Power of Modeling
First of all, measuring an individual's influence used to be more or less subjective. However, once the measuring system becomes a quantified model, clear standards are set, and comments or opinions colored by the judges' own feelings are taken out of consideration. Secondly, modeling provides a more sufficient approach to problems involving many complex factors. For instance, it is popular to divide relationships within a social network into 6 degrees. If you want to become acquainted with a celebrity, the model can help find the shortest and most feasible path based on your current relationship network. From this example we can see the benefits of models: they solve problems by setting up quantified structures that make the problems more intuitive.
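The celebrity example in the last paragraph is, in network terms, a shortest-path query. A minimal sketch (the acquaintance edges are invented illustration data):

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("you", "friend"), ("friend", "agent"),
    ("agent", "celebrity"), ("you", "colleague"),
])

# The most feasible chain of introductions is a shortest path.
print(nx.shortest_path(G, "you", "celebrity"))
# ['you', 'friend', 'agent', 'celebrity']
```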
References
[1] Vladimir Batagelj, Andrej Mrvar, Some analyses of Erdős collaboration graph, Social Networks 22 (2000), 173–186.
[2] Batagelj, V., Mrvar, A. (1998), Pajek: a program for large network analysis, Connections 21(2), 47–57.
[3] M. E. J. Newman, The structure and function of complex networks, SIAM Review 45(2) (2003), 167–256.
[4] Vladimir Batagelj, Matjaž Zaveršnik, An O(m) algorithm for cores decomposition of networks, 2003.
[5] Batagelj, V., Mrvar, A. (1998), Pajek: a program for large network analysis, Connections 21(2), 47–57.
[6] Paul Edelman, Tracey George, Six Degrees of Cass Sunstein: Collaboration Networks in Legal Scholarship, Green Bag 2d, Vol. 11, No. 1, p. 19, 2007.
[7] Batagelj, V., Mrvar, A. (2000), Some analyses of Erdős collaboration graph, Social Networks 22, 173–186.