This document discusses approaches to qualitative data analysis. It covers topics such as the lack of a single correct approach and the interpretive nature of qualitative analysis. It also discusses transcribing interviews, thick description, reflexivity, respondent validation, ethics, and computer assisted qualitative data analysis software. The main points are that qualitative analysis requires interpretation, commences early in the research process, and involves an ongoing iterative process between data collection and analysis.
Visual guide to qualitative analysis with Quirkos, by Daniel Turner
Slides to be adapted for teaching a qualitative text analysis curriculum with Quirkos in a classroom or lab setting.
For more information, and to download these slides, visit http://www.quirkos.com
Airport Wings Private Limited is a leading aviation training and manpower service provider to reputed clients operating from Delhi IGI Airport, including domestic and international airlines, ground handling companies, retail outlets, and lounges. Wings is a government-registered company under the Companies Act, 1956, with ISO 9001:2008 certification.
A presentation to the UC Berkeley D-Lab on the basics of using CAQDAS software for qualitative analysis, plus an introductory walkthrough of the features of Atlas.ti.
Choosing the right software for your research study: an overview of leading CAQDAS packages, by Merlien Institute
Choosing the right software for your research study : an overview of leading CAQDAS packages by Christina Silver. This presentation is part of the proceedings of the International workshop on Computer-Aided Qualitative Research organised by Merlien Institute. This workshop was held on the 4-5 June in Utrecht, The Netherlands
During this webinar, Dr. Lani will discuss qualitative analyses for dissertation Chapter 4. Special emphasis will be given to Phenomenological, Case study, and Grounded theory approaches.
The document discusses qualitative research methods and analysis. It describes common qualitative data collection techniques like interviews and observation. It explains that qualitative research aims to understand meaning, context, processes, and reasoning from the participant's perspective. The document contrasts qualitative and quantitative approaches, noting that qualitative research relies on words rather than numbers and uses inductive rather than deductive reasoning. It also outlines common techniques for analyzing qualitative data, including open coding, systematic coding, and affinity diagramming.
Illustrated Code: Building Software in a Literate Way
Andreas Zeller, CISPA Helmholtz Center for Information Security
Notebooks, rich interactive documents that join together code, documentation, and outputs, are all the rage with data scientists. But can they be used for actual software development? In this talk, I share experiences from authoring two interactive textbooks, fuzzingbook.org and debuggingbook.org, and show how notebooks not only serve for exploring and explaining code and data, but also how they can be used as software modules, integrating self-checking documentation, tests, and tutorials all in one place. The resulting software focuses on the essential, is well-documented, highly maintainable, easily extensible, and has a much higher shelf life than the "duct tape and wire" prototypes frequently found in research and beyond.
This is a North Central University paper about qualitative data analysis software. It is written in APA format, includes references, and is graded by an instructor.
Group X analyzed data using computer software. They discussed several types of software for analyzing qualitative data, including those for coding text, developing theories, and building conceptual networks. The functions to look for include coding, memoing, searching, and displaying data. There is no single best software; the researcher must consider their data, approach, and needs. The document provided examples of research articles that used different software like MS Word, NVivo, and Qualrus to analyze qualitative data.
Training on the NVivo qualitative data analysis software, by Valéry Ridde
On 20 March, the REALISME Chair organized at IRSPUM a training session for students on the use of the NVivo qualitative data analysis software, given by Pierre Lefèvre, a sociologist from the Department of Public Health at the Institute of Tropical Medicine in Antwerp.
The STAT technical report provides an introduction to the Stat project, which aims to develop an open source machine learning framework in Java called Stat for text analysis. Stat focuses on facilitating common textual data analysis tasks for researchers and engineers. The report outlines the background, motivation, scope, and stakeholders of the project. It also describes an initial survey conducted to understand potential users and their needs in order to prioritize the framework's design and implementation. Finally, the report analyzes two existing toolkits, Weka and MinorThird, and discusses their strengths and limitations for text analysis tasks.
Modular Documentation, by Joe Gelb, Techshoret 2009, Suite Solutions
Designing, building and maintaining a coherent content model is critical to proper planning, creation, management and delivery of documentation and training content. This is especially true when implementing a modular or topic-based XML standard such as DITA, SCORM and S1000D, and is essential for successfully facilitating content reuse, multi-purpose conditional publishing and user-driven content.
During this presentation we will review basic concepts and methods for implementing information architecture. We will then introduce an innovative, comprehensive methodology for information modeling and content development that employs recognized XML standards for representation and interchange of knowledge, such as Topic Maps and SKOS. In this way, semantic technologies designed for taxonomy and ontology development can be brought to bear for creating and managing technical documentation and training content, and ultimately impacting the usability and findability of technical information.
Data analysis using computers for presentation, by Noonapau
The document discusses using computer software for data analysis. It provides examples of different types of software including word processors, code-and-retrieve programs, and conceptual network builders. It emphasizes that the researcher should choose software based on their methodology and the type and amount of data, rather than which software is considered "best." The document also summarizes several research articles that used different software programs like MS Word, NVivo, and Qualrus to analyze qualitative data.
The document discusses the Total Data Science Process (TDSP) which aims to integrate DevOps practices into the data science workflow to improve collaboration, quality, and productivity. The TDSP provides standardized components like a data science lifecycle, project templates and roles, reusable utilities, and shared infrastructure to help address common challenges around organization, collaboration, quality control, and knowledge sharing for data science teams. It describes the various TDSP components that standardize the data science process and ease challenges around the data science solutions development lifecycle.
Using qualitative software in policy research, by stars_toto
This document discusses the benefits of using qualitative software for policy research. It summarizes that qualitative software can help organize data, allow researchers to focus on analysis rather than organization, support systematic and transparent research, and explore or process large amounts of data. Specific software packages mentioned include Atlas.ti and NVivo, which support tasks like project management, coding, retrieving coded segments, and mapping ideas. The document also notes that qualitative software does not do the analysis for the researcher and is a flexible tool for both inductive and deductive approaches.
This document provides an overview and requirements for the Stat project, an open source machine learning framework for text analysis. It describes the background, motivation, scope, and stakeholders of the project. Key requirements for the framework include being simplified, reusable, and providing built-in capabilities to naturally support text representation and processing tasks.
This poster presents guidelines for researchers to improve reproducibility in scientific research by better documenting the key entities of research: data, software, workflow, and research output. It recommends documenting data sources and processing steps, writing descriptive code with examples, and using tools like Docker, Jupyter notebooks, LaTeX, and data repositories to capture the experimental environment and research process. Following these guidelines helps researchers communicate and verify their work, allowing others to build on their research findings.
The most hated thing a developer can imagine is writing documentation. But on the other hand nothing can compare with a well sorted documentation, in case you want to change or extend something or just want to get into the topic again. We all know, there is no major way how to do documentation, but there a number of principles and todos which makes it much easier for you. This talk is not about tools, like phpDocumentor, nor is it about promoting a special way of documentation. It is about some of the thoughts you should have gone through, before and when writing documentation.
Java programming: solving problems with software, by Son Nguyen
This document provides an overview of a course on Java programming that teaches students how to solve problems through writing Java programs. The course covers basic Java concepts like control flow and object-oriented programming. It also introduces custom classes that allow students to work with images, websites, and CSV files. By the end of the course, students will be able to write, compile, debug, and develop Java programs to solve various problems. The course consists of multiple modules that teach skills like using strings, CSV files, and basic statistics to manipulate and analyze data in Java programs. It concludes with a mini project involving analyzing baby name popularity data over time.
The document discusses setting up a digital library using DSpace to digitize and share the college's resources. DSpace is presented as a solution to questions around how to provide access to materials like previous projects, seminars, journals, teaching materials, question banks, cultural events, and more. It allows different file formats to be captured and distributed online with search capabilities. Implementing DSpace involves hardware, software, and support from HP to build the repository and allow students and faculty to access educational resources digitally.
Jelita Asian is an experienced software developer, researcher, and lecturer with over 15 years of experience. She has worked as a lecturer at Surya University since 2011, teaching programming, data structures, algorithms, and other computer science courses. Prior to that, she held various software engineering roles in Australia and Singapore, developing applications using languages such as Java, C++, Python and more. She received her PhD in Computer Science from RMIT University in 2007.
Words Doctorate provides PhD thesis and research-related support for PhD students in all streams.
We provide a complete solution for PhD candidates:
- Synopsis
- Thesis
- Research proposal
- Research paper
- Research paper publication in reputed international journals
- Software-based project implementation
- PhD presentation
The Nuxeo Way: leveraging open source to build a world-class ECM platform, by Nuxeo
How can one create and deliver enterprise-class software, worth tens of years of R&D, with minimal capital investment? Open source can help, as well as the right context and ecosystem. This first talk will highlight the experience gained in the 8 first years of Nuxeo, and how they were applied to the latest iteration of the Nuxeo Platform.
The document discusses using computer software to analyze qualitative data, describing different types of analysis software and their functions. It also provides examples of research studies that used various computer-assisted qualitative data analysis software packages like MS Word, NVivo, and NUD*IST to code and analyze interview transcripts, field notes, and other qualitative data sources. The document emphasizes that the choice of software depends on the researcher's methodology, data types and amount, and analysis approach.
Scala is a programming language that combines object-oriented and functional programming which allows developers to write more functionality with less code compared to Java. It compiles to Java bytecode and runs on the Java Virtual Machine, allowing reuse of existing Java libraries and tools. Adopting Scala can increase developer productivity and quality by reducing code size by 30-50%, shortening development time and reducing bugs.
This document provides an introduction and overview of the Stat project, which aims to create an open source machine learning framework in Java for text analysis. The Stat framework is designed to be simple, extensible, and performant. It aims to simplify common text analysis tasks for researchers and engineers by providing reusable tools and wrappers for existing NLP and machine learning packages. The document outlines the goals, scope, stakeholders and provides an initial requirements analysis for the Stat framework.
This document provides guidance on sound scholarship and academic writing. It explains that scholarly writing should tell a single story arc that establishes the research question's significance, develops the methodology and findings, and discusses implications. It emphasizes establishing warrant and logic for all arguments, using evidence from literature and one's own research. The document also addresses topics like maintaining an objective tone, integrating quotations, referencing sources, and writing in an academic style.
These are the slides from a teaching session I ran to get our doctoral students thinking a bit more critically about the nature of technology in Higher Education. (Note, it's deliberately controversial in places)
Similar to Computer Aided Qualitative Data Analysis Software
The document provides information about using the Lincoln Repository as an open access repository for academic work. It explains that authors retain licensing rights for their work, outlines different license options through Creative Commons, and notes that publisher policies should be checked to ensure compliance. The benefits of open access through the repository are discussed as increasing impact and profile, aligning with social justice principles of fairness, and potentially gaining more citations to help build research networks.
E-portfolios provide pedagogical value by allowing learners to collect work in one digital place, select artifacts to showcase with reflective commentary, and easily share their portfolio. Software tools like PebblePad and Mahara allow portfolios to be taken when students leave an institution, while Blackboard does not. The future of e-portfolios is believed to involve further expansion and use, summed up by one study respondent's one-word answer: "More".
1) The presentation discusses upcoming changes to Blackboard for the new academic year, including that the gradebook will now be called the "Grade Centre" and subject sites will no longer roll over.
2) Blackboard sites are populated overnight based on student enrollment data from the university's student information system (QLS), including at the module, award, and subject levels.
3) Students will only see sites that teaching staff have made available, even if students are enrolled in relevant modules in QLS. Problems can occur if this availability is not set correctly.
A presentation given at the University of Lincoln (UK) Teaching Symposium on the potential value of e-portfolios in continuing professional development, and personal development planning
2. Qualitative data
Qualitative data includes text, visual, and multimedia information:
- Background information
- Primary data
- Secondary data
- Relevant supporting information
All of this data has to be managed, and some of it has to be analysed.
3. Software typology
- Text retrievers / textbase managers
- Code-and-retrieve packages / theory builders / conceptual network builders
Based on Lewins & Silver (2007), Using Software in Qualitative Research, Sage.
4. Code-based theory builders
- Enable thematic coding of chunks of data
- Allow reduction of data along thematic lines
- Can search text, and codes
- Can record relationships between issues, concepts, and themes
- Can develop more detailed codes where certain conditions combine in the data
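The code-and-retrieve model behind these theory builders is essentially a data structure: text segments tagged with one or more codes, retrievable by code, with co-occurring codes countable across segments. A minimal Python sketch of that idea (a toy illustration, not the API of any actual CAQDAS package; the segment texts are made up):

```python
# Minimal sketch of the code-and-retrieve model used by CAQDAS
# theory builders: each text segment carries one or more codes,
# segments are retrievable by code, and code co-occurrences
# (two codes applied to the same segment) can be counted.
from collections import defaultdict
from itertools import combinations

class CodedProject:
    def __init__(self):
        self.segments = []              # list of (text, set_of_codes)
        self.index = defaultdict(list)  # code -> list of segment ids

    def code(self, text, *codes):
        """Tag a segment of text with one or more codes."""
        sid = len(self.segments)
        self.segments.append((text, set(codes)))
        for c in codes:
            self.index[c].append(sid)

    def retrieve(self, code):
        """All text segments tagged with a given code."""
        return [self.segments[i][0] for i in self.index[code]]

    def cooccurrences(self):
        """Count how often each pair of codes shares a segment."""
        counts = defaultdict(int)
        for _, codes in self.segments:
            for a, b in combinations(sorted(codes), 2):
                counts[(a, b)] += 1
        return dict(counts)

p = CodedProject()
p.code("I felt the training was rushed", "training", "time-pressure")
p.code("There was no time to practise", "time-pressure")
p.code("The trainer was supportive", "training", "support")

print(p.retrieve("time-pressure"))
print(p.cooccurrences())
```

Retrieving a code here is the "print a report of all text coded under a theme" step, and the co-occurrence counts are a crude version of what the slides call recording relationships between concepts.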
5. Text-based tools
- Essentially allow sophisticated searching of text and language
- Use thesauri to find words with similar meaning
- Provide word frequency tables
- Provide easy keyword-in-context (KWIC) retrieval
- Aimed at researchers who are focused on textual analysis
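Two of the features listed above, word frequency tables and keyword-in-context (KWIC) retrieval, are simple enough to sketch in a few lines of Python. This is a toy illustration with made-up sample text, not how any particular package implements them:

```python
# Toy versions of two text-tool features: a word frequency table
# and keyword-in-context (KWIC) retrieval with a fixed window of
# context words on either side of each hit.
import re
from collections import Counter

def word_frequencies(text):
    """Lowercased word counts, ignoring punctuation."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def kwic(text, keyword, window=3):
    """Each occurrence of keyword with `window` words of context."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"{left} [{w}] {right}")
    return hits

sample = ("The coding went well. Coding every interview took time, "
          "but coding paid off.")
print(word_frequencies(sample).most_common(2))
print(kwic(sample, "coding", window=2))
```

Real text-based tools add thesaurus lookup and much better tokenisation on top of this, but the frequency table and the bracketed-keyword concordance lines are the same basic outputs.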
6. Implications of using software
- Data preparation is very important; not all data is computer readable!
- It is not necessary to use every feature available in a software package
- The most functional package is not necessarily the best one for you
7. Structure of a CAQDAS package
- All your files are contained in a project
- All packages maintain a database of your files
- Most packages are designed to handle text, which means that the format of imported text matters
- Some packages can handle multimedia
8. Benefits
- The researcher is much closer to the data
- Easy to explore data
- Many packages allow annotations of text
- Text can be coded, and multiple codes can be applied to one piece of text
9. Writing and output
- Many CAQDAS packages offer writing tools
- Many offer different ways to sort and retrieve content
- Most offer output to Word, Excel, or SPSS
- Many offer graphic representations of your coding schema
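The Excel/SPSS output mentioned above is typically just a tabular dump of coded segments. A hypothetical sketch of what such an export might look like, written as CSV with Python's standard library (the column names and segment records here are invented for illustration, not any package's actual export format):

```python
# Sketch of a CAQDAS-style export: one row per coded segment,
# written as CSV so it could be opened in Excel or read into
# a statistics package.
import csv
import io

# Invented example records: code, source file, coded text.
segments = [
    {"code": "time-pressure", "source": "interview_01.txt",
     "text": "There was no time to practise"},
    {"code": "support", "source": "interview_02.txt",
     "text": "The trainer was supportive"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["code", "source", "text"])
writer.writeheader()
writer.writerows(segments)
print(buf.getvalue())
```

In practice you would write to a file rather than a string buffer; the point is only that "output to Excel or SPSS" usually means a flat table of codes, sources, and text.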
10. Working with a CAQDAS package
- Know your methodological approach first!
- Don't expect it to work for you from the start
- Keep one project for each research project
- Do thoroughly familiarise yourself with the writing and memo tools
- Come out of the software when you need to
11. Some packages
- NVivo (very expensive, but the market leader)
- The Ethnograph (slightly cheaper, and well featured)
- Weft (free, but very limited functionality)
There are others, but we don't have time to go through them.
12. NVivo
- Off campus you need to buy a personal license (£130 for a student 12-month license; a full license is £330.00)
- http://www.qsrinternational.com/
- Available to students on campus only
13. The Ethnograph
- Full license is $299.00
- http://www.qualisresearch.com/
- Not supported by the university
- Student license available for $99.00
14. Weft
- Very limited functionality: essentially coding, retrieval, and memo only
- http://www.pressure.to/qda/
- Completely free!
15. And the oddity! Zotero
- Helps you collect, manage, and cite your resources (not ideal for data)
- An extension to the Firefox internet browser
- Free
- http://www.zotero.org/ (but you MUST be using Firefox)
Editor's Notes
#2: Understand this is not a training session in any specific package. There are lots of packages out there, and it would be invidious to recommend one over another. It really does depend on what you want to do.
#3: Background information includes things like project briefs, notes from tutorials, discussions, etc. Primary data is interview transcripts, open-ended questionnaire responses, field notes, etc. Secondary data might be official documents or newspaper articles, and relevant supporting information might be literature, supporting quantitative evidence, relevant web sites, and media coverage. Some of this will need to be managed: identify and record documents (e.g. interview transcripts, images) and tag them with metadata (author, date, location, subjects, etc.). Some will need deeper analysis: searched for themes, codes, processes, and contexts that appear in the literature, in your theoretical approach to your topic, or as emergent themes. A CAQDAS package will help you do both these things, and keep all your data in one place.
#4: Roughly two types. I'm not going to talk much about the first, because that is a rather specialised form of quantitative research, and anyhow the distinction between them is becoming very blurred. Most packages in the second group can now do both things quite well.
#5: A code-based theory builder enables you to take any piece of data and give it one or more codes. This might be a single word or phrase, or a whole chapter. The idea is to break your data into manageable groups. Qualitative data is characteristically non-linear, so you identify themes and mark bits of text with them (codes). Of course, when you come to write up your work, you'll want to know what your participants said about each theme, so you can search for and print out a report of all the text coded under it. More sophisticated packages allow you to identify relationships between codes. You might notice that all your female participants have strong views about a particular theme, so there's a relationship between gender and whatever that theme is. The software will allow you to make a note of that. Finally, you can look in detail at, for instance, occasions where codes appear in close proximity to each other in the data: is there some significance to that? Well, that's for you to decide, but the software can point it out for you!
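The code-and-retrieve workflow described in this note can be sketched in a few lines. This is only an illustration of the idea, not how any particular CAQDAS package is implemented; the segment data and code names below are invented for the example.

```python
from collections import defaultdict
from itertools import combinations

# Each coded segment: (source document, text, set of codes applied to it).
# These segments and codes are made up purely for illustration.
segments = [
    ("interview_01", "I felt the service ignored older patients.", {"age", "exclusion"}),
    ("interview_01", "Staff were friendly once you got seen.", {"staff attitudes"}),
    ("interview_02", "My mother was passed over twice.", {"age", "exclusion"}),
]

def retrieve(code):
    """The 'retrieve' step: report every segment coded under one theme."""
    return [(doc, text) for doc, text, codes in segments if code in codes]

def co_occurrences():
    """Count pairs of codes applied to the same segment - a crude stand-in
    for the proximity/relationship searches the note describes."""
    counts = defaultdict(int)
    for _, _, codes in segments:
        for pair in combinations(sorted(codes), 2):
            counts[pair] += 1
    return dict(counts)
```

Running `retrieve("age")` pulls out both segments tagged with that code, and `co_occurrences()` shows that "age" and "exclusion" were applied together twice, which is exactly the kind of pattern the software can point out for you to interpret.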
#6: This is a very specialist form of analysis and, to be honest, not really my field. I would argue that it is closer to quantitative research, but if this is your methodology, then these are the features you should be looking for.
#7: Most packages limit the types of data files you can import, or at least that you can work on. It's not usually a good idea, for example, to use Word documents as they are. I convert them to plain text as a rule, but if you really need to keep the formatting, RTF is a better bet. You also have to be prepared to summarise non-readable data. You might just type a summary or a few bullet points, but if you do that, have some sort of box or file, and use some sort of numbering system to make it easy to retrieve the original. Now, most CAQDAS packages offer far more functionality than you will ever need. Do not feel you have to use every feature on the menu. I did use NVivo, but I suspect I only used about 25% of its features. Decide what you want to do. If you just want to code, then you don't need a complex relationship builder. But do look for a writing tool, which I will come back to later.
#8: Maintain a database of your files, either importing them into an internal database or connecting to external files. Handle text, which means that the format of imported text matters; it is usually wise to import text as plain text, and some of the free packages can only do this. Multimedia is usually segmented by
#9: You're close because it is always easily accessible and in the same place. Many packages allow you to make marginal notes, or open different viewing panes so you can see the data
#10: You can write memos about why you chose certain codes (some packages have a special code book function), or about anything else.
#11: But be prepared to modify it. If you're using one of the more sophisticated tools, you'll find that the software takes time to get to know and to match to your methodological approach. One project for each research project (not one for each respondent, case, etc.; the software will manage this for you). The writing tools are really helpful in retaining insights and reminding you of small impromptu action plans that might work for you later. There is something to be said for going back to basics: print out a coded data report and sit down with a highlighter pen to deconstruct it, or print out tables of codes to see if there's anything missing.
#16: Not strictly speaking a CAQDAS package. May be useful for those starting out in research.