Stringing beads: from tool combinations to workflows

[Update 20170820: the interactive online table now includes the 7 most often mentioned ‘other’ tools for each question, next to the 7 preset choices. See also heatmap, values and calculations for this dataset]

With the data from our global survey of scholarly communication tool usage, we want to work towards identifying and characterizing full research workflows (from discovery to assessment).

Previously, we explained the methodology we used to assess which tool combinations occur together in research workflows more often than would be expected by chance. How can the results (heatmap, values and calculations) be used to identify real-life research workflows? Which tools really love each other, and what does that mean for the way researchers (can) work?

Comparing co-occurrences for different tools/platforms
First of all, it is interesting to compare the sets of tools that are specifically used together (or not used together) with different tools/platforms. To make this easier, we have constructed an interactive online table (http://tinyurl.com/toolcombinations, with a colour-blind safe version available at http://tinyurl.com/toolcombinations-cb) that allows anyone to select a specific tool and see those combinations. For instance, comparing tools specifically used by people publishing in journals from open access publishers vs. traditional publishers (Figures 1 and 2) reveals interesting patterns.

For example, while publishing in open access journals is correlated with the use of several repositories and preprint servers (institutional repositories, PubMed Central and bioRxiv, specifically), publishing in traditional journals is not. The one exception here is sharing publications through ResearchGate, an activity that seems to be positively correlated with publishing regardless of venue.

Another interesting finding is that while people publishing in open access journals and people publishing in traditional journals both specifically use the impact factor and Web of Science to measure impact (again, this may be correlated with the activity of publishing, regardless of venue), altmetrics tools/platforms are used specifically by people publishing in open access journals. There is even a negative correlation between the use of Altmetric and ImpactStory and publishing in traditional journals.

Such results can also be interesting for tool/platform providers, as they provide information on the other tools/platforms their users employ. In addition to the data on tools specifically used together, providers could also use absolute numbers on tool usage to identify tools that are popular, but not (yet) specifically used with their own tool/platform. This could reveal opportunities to improve interoperability and integration of their own tool with other tools/platforms. All data are of course fully open and available for any party to analyze and use.


Figure 1. Tool combinations – Topical journal (Open Access publisher)


Figure 2. Tool combinations – Topical journal (traditional publisher)

Towards identifying workflows: clusters and cliques
The examples above show that, although we have so far only analyzed combinations of two tools/platforms, these data already bring to light some interesting differences between research workflows. There are several possibilities for extending this analysis from separate tool combinations to groups of tools typifying full research workflows. Two of these possibilities are looking at clusters and cliques, respectively.

1. Clusters: tools occurring in similar workflows
Based on our co-occurrence data, we can look at which tools occur in similar workflows, i.e. which tools share a similar pattern of tools they are (or are not) specifically used with. This can be done in R using a clustering analysis script provided by Bastian Greshake (see GitHub repo with code, source data and output). When run with our co-occurrence data, the script essentially sorts the original heatmap with green and red cells by placing tools with similar correlation patterns closer together (Figure 3). The tree structure on both sides of the diagram indicates the hierarchy of tools that are most similar in this respect.
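The gist of this approach can be reproduced in a few lines of base R. The snippet below is a minimal sketch on invented data, not the actual analysis script: the hypothetical matrix cooc stands in for our co-occurrence results, with +1 for a green cell, -1 for a red cell and 0 for no significant correlation.

# Minimal sketch with toy data (not the real script or survey data)
set.seed(1)
tools <- paste0("tool", 1:12)
cooc <- matrix(sample(c(-1, 0, 1), 144, replace = TRUE), 12, 12,
               dimnames = list(tools, tools))
cooc[lower.tri(cooc)] <- t(cooc)[lower.tri(cooc)]  # make the matrix symmetric
diag(cooc) <- 1

# heatmap() clusters rows and columns hierarchically and draws the
# dendrograms, placing tools with similar correlation patterns side by side
heatmap(cooc, scale = "none", symm = TRUE, col = c("red", "white", "green"))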


Figure 3. Cluster analysis of tool usage across workflows (click on image for larger version). Blue squares A and B indicate clusters highlighted in Figure 4. A color-blind safe version of this figure can be found here.

Although the similarities (indicated by the length of the branches in the hierarchy tree, with shorter branches signifying closer resemblance) are not that strong, some clusters can still be identified. For example, one cluster contains popular, mostly traditional tools (Figure 4A), while another contains mostly innovative/experimental tools that apparently occur in similar workflows together (Figure 4B).


Figure 4. Two examples of clusters of tools (both clusters are highlighted in blue in Figure 3).

2. Cliques: tools that are linked together as a group
Another approach to defining workflows is to identify groups of tools that are all specifically used with *all* other tools in that group. In network theory, such groups are called ‘cliques’. Luckily, there is a good R library (igraph) for identifying cliques from co-occurrence data. Using this library (see GitHub repo with code, source data and output), we found that the largest cliques in our set of tools consist of 17 tools. We identified 8 of these cliques, which are partially overlapping. In total, there are over 3,000 ‘maximal cliques’ (cliques that cannot be enlarged) in our dataset of 119 preset tools, varying in size from 3 to 17 tools. So there is lots to analyze!
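To give an idea of the mechanics, the snippet below runs the same clique-finding calls on a random toy matrix; adj stands in for the green cells of our heatmap (adj[i, j] = 1 if tools i and j are specifically used together). It is an illustration, not our actual analysis code (which is in the GitHub repo).

# Toy illustration of clique finding with igraph (random data)
library(igraph)

set.seed(42)
m <- matrix(rbinom(15 * 15, 1, 0.5), 15, 15)
adj <- m * t(m)   # symmetric 0/1 matrix of 'specifically used together' pairs
diag(adj) <- 0    # no self-links

g <- graph_from_adjacency_matrix(adj, mode = "undirected")

largest_cliques(g)               # the biggest groups of mutually linked tools
length(max_cliques(g, min = 3))  # number of maximal cliques of size >= 3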

An example of one of the largest cliques is shown in Figure 5. It shows a workflow with mostly modern and innovative tools, with an emphasis on open science (collaborative writing, sharing data, publishing open access, measuring broader impact with altmetrics tools). Surprisingly, all these tools are apparently also used together with the more traditional ResearcherID. A hypothetical explanation might be that this represents the workflow of a subset of people who are actively aware of and involved in scholarly communication: they started using ResearcherID when there was not much else available, still have it, and now combine it with many other, more modern tools.


Figure 5. Example of a clique: tools that all specifically co-occur with each other

Clusters and cliques: not the same
It’s important to realize the difference between the two approaches described above. While the clustering algorithm considers similarity in patterns of co-occurrence between tools, the clique approach identifies closely linked groups of tools that can, however, each also co-occur with other tools in workflows.

In other words, tools/platforms that are clustered together occur in similar workflows, but do not necessarily all specifically occur together (see the presence of white and red squares in Figure 4A,B). Conversely, tools that do all specifically occur together, and thus form a clique, can appear in different clusters, as each can have a different pattern of co-occurrences with other tools (compare Figures 3 and 5).

In addition, it is worth noting that these approaches to identifying workflows are based on statistical analysis of aggregated data; clusters or cliques therefore do not necessarily correspond exactly to the individual workflows of survey respondents. We are thus not describing actual observed patterns, but inferring patterns from observed strong correlations between pairs of tools/platforms.

Characterizing workflows further – next steps
Our current analyses of tool combinations and workflows are based on survey answers from all participants, for the 119 preset tools in our survey. We would like to extend these analyses to include the tools most often mentioned by participants as ‘others’. We also want to focus on differences and similarities between the workflows of specific subgroups (e.g. different disciplines, research roles and/or countries). The demographic variables in our public dataset (on Zenodo or Kaggle) allow for such breakdowns, but it would require coding an R script to generate the co-occurrence probabilities for different subgroups; a minimal sketch of what that could look like is shown below. And finally, we can add variables to the tools, for instance classifying which tools support open research practices and which don’t. This would allow us to investigate to what extent full Open Science workflows are not only theoretically possible, but already put into practice by researchers.
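To give an idea of what such a script would involve, here is a hypothetical sketch; survey and the column names (discipline, uses_toolA, uses_toolB) are invented for illustration and do not match the actual dataset.

# Hypothetical sketch of a subgroup breakdown (invented column names)
subgroup <- subset(survey, discipline == "Life Sciences")

n_a  <- sum(subgroup$uses_toolA)                        # users of tool A
n_b  <- sum(subgroup$uses_toolB)                        # users of tool B
n_ab <- sum(subgroup$uses_toolA & subgroup$uses_toolB)  # users of both
n    <- nrow(subgroup)                                  # subgroup population

# one-tailed hypergeometric test: P(overlap >= n_ab) within this subgroup
p <- phyper(n_ab - 1, n_a, n - n_a, n_b, lower.tail = FALSE)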

See also our short video, added below:

header image: Turquoise Beads, Circe Denyer, CC0, PublicDomainPictures.net

Tools that love to be together

[updates in brackets below]
[see also follow-up post: Stringing beads: from tool combinations to workflows]

Our survey data analyses so far have focused on tool usage for specific research activities (e.g. GitHub and others: data sharing, Who is using altmetrics tools, The number games). As a next step, we want to explore which tool combinations occur together in research workflows more often than would be expected by chance. This will also facilitate identification of full research workflows, and subsequent empirical testing of our hypothetical workflows against reality.

Checking which tools occur together more often than expected by chance is not as simple as looking at which tools are most often mentioned together. For example, even if two tools are not used by many people, they might still occur together in people’s workflows more often than expected based on their relatively low overall usage. Conversely, take two tools that are each used by many people: stochastically, a sizable proportion of those people will be shown to use both of them, but this might still be due to chance alone.

Thus, to determine whether the number of people that use two tools together is significantly higher than can be expected by chance, we have to look at the expected co-use of these tools given the number of people that use either of them. This can be compared to the classic statistics example of taking colored balls out of an urn without replacement: if an urn contains 100 balls (= the population), of which 60 are red (= people in that population who use tool A), and from these 100 balls a sample of 10 balls is taken (= people in the population who use tool B), how many of these 10 balls will be red (= people who use both tools A and B)? This will vary with each try, of course, but when you repeat the experiment many times, the most frequently occurring number of red balls in the sample will be 6. The stochastic distribution in this situation is the hypergeometric distribution.
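The whole urn experiment collapses into a single call to R’s built-in hypergeometric functions; as a quick check:

# probability of drawing exactly x red balls: 60 red, 40 other, sample of 10
x <- 0:10
round(dhyper(x, m = 60, n = 40, k = 10), 3)
x[which.max(dhyper(x, m = 60, n = 40, k = 10))]  # most likely outcome: 6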


Figure 1. Source: Memrise

For any possible number x of red balls in the sample (i.e. 0-10), the probability of getting exactly x red balls in a given try can be calculated with the hypergeometric probability function. The cumulative hypergeometric probability function gives the probability that the number of red balls in the sample is x or higher. This probability is the p-value of the hypergeometric test (identical to the one-tailed Fisher test), and can be used to assess whether an observed result (e.g. 9 red balls in the sample) is significantly higher than expected by chance. In a single experiment as described above, a p-value of less than 0.05 is commonly considered significant.

In our example, the probability of getting at least 9 red balls in the sample is 0.039 (Figure 2). Going back to our survey data, this translates to the probability that in a population of 100 people, of which 60 use tool A and 10 use tool B, 9 or more people use both tools.


Figure 2. Example of hypergeometric probability calculated using GeneProf.
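The same value can be verified with one line of R, where phyper() is the cumulative hypergeometric distribution function:

# P(9 or more red balls) = 1 - P(8 or fewer); 60 red, 40 other, sample of 10
phyper(8, m = 60, n = 40, k = 10, lower.tail = FALSE)  # ~0.039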

In applying the hypergeometric test to our survey data, some additional considerations come into play.

Population size
First, for each combination of two tools, what should be taken as the total population size (i.e. the 100 balls/100 people in the example above)? It might seem intuitive to take the total number of respondents (20,663 for the survey as a whole). However, it is actually better to use only the number of respondents who answered both of the survey questions in which tools A and B occurred as answer options.

People who didn’t answer both questions cannot possibly have indicated using both tools A and B. In addition, the probability that at least x people are found to use tools A and B together is lower in a large total population than in a small one. This means that the larger the population, the smaller the number of respondents using both tools needs to be for that number to be considered significant. Thus, excluding people who did not answer both questions (and thereby looking at a smaller population) sets the bar higher for two tools to be considered preferentially used together, as illustrated below.
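A quick numerical illustration of this effect:

# the same overlap (9 co-users out of 60 users of A and 10 users of B) is
# much less likely by chance in a larger population, so the p-value drops
phyper(8, 60, 100 - 60, 10, lower.tail = FALSE)  # population 100: ~0.039
phyper(8, 60, 200 - 60, 10, lower.tail = FALSE)  # population 200: ~0.0001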

Choosing the p-value threshold
The other consideration in applying the hypergeometric test to our survey data is which p-value to use as a cut-off for significance. As noted above, in a single experiment, a result with a p-value lower than 0.05 is commonly considered significant. However, with multiple comparisons (in this case: when a large number of tool combinations is tested in the same dataset), keeping the same p-value threshold will result in an increased number of false-positive results (in this case: tools incorrectly identified as preferentially used together).

The reason is that a p-value threshold of 0.05 accepts a 5% chance that an observed result is due to chance alone. With many tests, there will inevitably be more results that seem positive but are in reality due to chance.

One possible solution to this problem is to divide the p-value threshold by the number of tests carried out simultaneously. This is called the Bonferroni correction. In our case, where we looked at 119 tools (7 preset answer options for each of 17 survey questions) and thus at 7,021 unique tool combinations, this results in a p-value threshold of 0.0000071.

Finally, when we not only want to look at tools used together more often than expected by chance, but also at tools used together less often than expected, we are performing a 2-tailed rather than a 1-tailed test. This means we need to halve the p-value threshold used to determine significance, resulting in a threshold of 0.0000036.
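In R, both thresholds fall out of two lines of arithmetic:

n_tests <- choose(119, 2)  # 7021 unique tool combinations
0.05 / n_tests             # Bonferroni-corrected threshold: ~0.0000071
0.05 / n_tests / 2         # halved for the 2-tailed test: ~0.0000036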

Ready, set, …
Having made the decisions above, we are now ready to apply the hypergeometric test to our survey data. For this, we need to know, for each tool combination (e.g. tools A and B, mentioned as answer options in survey questions X and Y, respectively):

a) the number of people that indicate using tool A
b) the number of people that indicate using tool B
c) the number of people that indicate using both tool A and B
d) the number of people that answered both survey questions X and Y (i.e. indicated using at least one tool (including ‘others’) for activity X and one for activity Y).

These numbers were extracted from the cleaned survey data either by filtering in Excel (a, b (12 MB), d (7 MB)) or through an R script (c, written by Roel Hogervorst during the Mozilla Science Sprint).

The cumulative probability function was calculated in Excel (values and calculations) using the following formulas:

=1-HYPGEOM.DIST((c-1),a,b,d,TRUE)
(to check for tool combinations used together more often than expected by chance)

and
=HYPGEOM.DIST(c,a,b,d,TRUE)
(to check for tool combinations used together less often than expected by chance)
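For anyone who prefers R over Excel, phyper() gives the same two probabilities. Below is a sketch with the example values from Figure 2 plugged in; for the real analysis, a, b, c and d would be filled in per tool combination.

a <- 60    # users of tool A
b <- 10    # users of tool B
c_ab <- 9  # users of both tools (the 'c' above, renamed to avoid base::c)
d <- 100   # respondents who answered both questions

phyper(c_ab - 1, b, d - b, a, lower.tail = FALSE)  # more often than expected
phyper(c_ab, b, d - b, a)                          # less often than expected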


Figure 3 – Twitter

Bonferroni correction was applied to the resulting p-values as described above, and conditional formatting was used to color the cells: all cells with a p-value of less than 0.0000036 were colored green or red, for tools used together more or less often than expected by chance, respectively.

The results were combined into a heatmap with green, red and uncolored cells (Figure 4), which can also be found as the first tab in the Excel files (values & calculations).

[Update 20170820: we now also have made the extended heatmap for all preset answer options and the 7 most often mentioned ‘others’ per survey question (Excel files: values & calculations)]


Figure 4. Heatmap of tool combinations used together more (green) or less (red) often than expected by chance (click on the image for a larger, zoomable version).

Pretty colors! Now what?
While this post focused on methodological aspects of identifying relevant tool combinations, in future posts we will show how the results can be used to identify real-life research workflows. Which tools really love each other, and what does that mean for the way researchers (can) work?

Many thanks to Bastian Greshake for his helpful advice and reading of a draft version of this blogpost. All errors in assumptions and execution of the statistics remain ours, of course 😉