Data are out. Start analyzing. But beware.

Now that our data set on research tool usage is out and the graphical dashboard has been shared, let the analysis begin! We hope people around the world will find the data interesting and useful.

If you are going to do in-depth analyses, make sure to read our article on the survey background and methods. It helps you understand the type of sampling we used and the resulting response distributions. It also explains the differences between the raw and cleaned data sets.

For more user-friendly insights, you can use the graphical dashboard made in Silk. It is easy to use, but still allows for quite sophisticated filtering and even supports filtering answers to one question by answers given to another question. Please be kind to Silk: it crunches a lot of data and may sometimes need a few seconds to render the charts.

example chart with filter options

Example chart that also shows filter options in the dashboard

When looking at the charts and when carrying out your analyses, please note two things.

First, whatever you are going to do, make sure to reckon with the fundamental difference between results from preset answers (entered by simply clicking an image) and those from specifications of other tools used (entered by typing the tool names manually). The latter are quite probably an underestimation and thus cannot be readily compared with the former. [Update 20160501: This is inherent to the differences between open and closed questions, of which ease of answering the question is one aspect. Specifications of ‘others’ can be seen as an open question.] This is why we present them separately in the dashboard. Integrated lists of these two types of results, if made at all, should be accompanied by the necessary caveats.

Frequency distribution of survey answers

Frequency distribution of 7 preset answers (dark blue) and the first 7 ‘other’ tools (light blue) per survey question

Second, basic statistics tell us that when you apply filters, the absolute numbers can in some cases become so low as to render the results unfit for any generalization. Conversely, when not filtering, please note that usage patterns will vary by research role, field, country, etc. Also, our sample was self-selected and thus not necessarily representative.
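For example, once the cleaned data set is loaded, such a size check takes only a few lines of code. Below is a minimal sketch in pandas; the file name and column names (research_role, country) are hypothetical and will need to be adapted to the actual data set.

```python
import pandas as pd

# Hypothetical file and column names; adjust to the actual cleaned data set.
df = pd.read_csv("cleaned_survey_data.csv")

# Example filter: PhD students in Germany.
subset = df[(df["research_role"] == "PhD student") & (df["country"] == "Germany")]

# Check the absolute number of responses before generalizing.
n = len(subset)
print(f"Responses after filtering: {n}")
if n < 20:
    print("Warning: too few responses for any meaningful generalization.")
```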

Now that we are aware of these two limitations, nothing stops you (and us) from diving in.

Our own priorities, time permitting, are to look at which tools are used together across research activities and why, at concentration ratios of tools used for the various research activities, and at combining these usage data with data on the tools themselves, such as age, origin and business model. More generally, we want to investigate what tool usage says about the way researchers shape their workflow: do they choose tools to make their work more efficient, open and/or reproducible? We also plan to do a more qualitative analysis of the thousands of answers people gave to the question of what they see as the most important development in scholarly communication.

By the way, we’d love to get your feedback and learn what you are using these data for, whether it is research, evaluation and planning of services, or something else entirely. Just mail us, or leave your reply here or on any open commenting/peer review platform!

Support for Open Science in EU member states

In preparation for the EU Open Science Conference on April 4-5 in Amsterdam, we looked at what our survey data reveal about declared support for Open Access and Open Science among researchers in the EU.

Support for Open Access and Open Science

Of the 20,663 survey respondents, 10,297 were from the EU, of which 7,358 were researchers (from PhD students to faculty). Most respondents provided an answer to the two multiple-choice questions on whether or not they support the goals of Open Access and Open Science, respectively. A large majority expressed support for Open Access (87%) and Open Science (79%) (see Fig 1).

OA/OS support from EU researchers

Fig. 1 Responses from EU researchers to survey questions on support for Open Access and Open Science

Even though support for Open Science is lower than for Open Access, this does not mean that many more people actively state they do NOT support Open Science compared to Open Access (see Fig 1). Rather, more people indicate ‘I don’t know’ in answer to the question on Open Science. This could mean they have not yet formed an opinion on Open Science, that they perhaps support some aspects of Open Science and not others, or simply that they found the wording of the question confusing.

It is interesting to note that the Open Access support figure roughly corresponds with results from the Taylor & Francis Open Access surveys of 2013 and 2014, in which only 16 and 11 percent of respondents, respectively, agreed with the statement that there are no fundamental benefits to Open Access publication.


Differences between member states

When we look at the differences in professed support for Open Access and Open Science in the various EU member states (see Fig 2, Table 1), we see that support for Open Access is relatively high in many Western European countries. Here, more funding opportunities for Open Access are often available, either through institutional funds or, increasingly, through negotiations with publishers in which APCs are included in institutional subscriptions for hybrid Open Access journals. Perhaps many researchers in Southern and Eastern member states associate Open Access with either expensive APCs or with “free” or nationally oriented journals they wish to avoid because they are required to publish in “international, highly ranked” venues.

Conversely, support for Open Science is higher in many countries in Southern and Eastern Europe. As pure conjecture, we might suggest that in these regions, with sometimes less developed research infrastructures, the benefits of Open Science, e.g. for collaboration, might be more apparent. The observed outliers to this general pattern (e.g. Belgium and Italy) illustrate both the limitations of these survey data (number of responses and possible bias) and the fact that the whole picture is likely to be more complicated.

OA-OS support EU member states

Fig. 2 Level of support for Open Access (left panel) and Open Science (right panel) in individual EU member states. Scale is based on non-weighted country averages. Results for states with fewer than 20 individual responses are omitted (see Table 1).

In general, the above differences between member states come into even clearer focus when support for Open Science is compared to that for Open Access for each country. Fig 3 shows whether support for Open Science in a given country is higher or lower than support for Open Access. Again, in most Western European countries Open Access is readily embraced, while Open Science, perhaps because it goes further and is a more recent development, meets more doubt or even resistance. In many Southern and Eastern European countries, the pattern is reversed. Clearly though, this cannot be the full story. Finding out what is behind these differences may valuably inform discussions on how to proceed with Open Access/Open Science policies and implementation.

OS vs. OA support EU member states

Fig. 3 Ratio of support for Open Science (OS) and Open Access (OA) in individual EU member states (red = relatively more support for OA than for OS, green = relatively more support for OS than OA). Scale is based on non-weighted country ratios. Results for states with fewer than 20 individual responses were omitted (see Table 1).

Irrespective of differences between countries, the large overall majority supporting both Open Access and Open Science among European researchers is perhaps the most striking result. Of course, support does not automatically imply that one puts ideas into practice. It will therefore be interesting to look at the actual research workflows of the researchers who took our survey, to see to what extent their practices align with their stated support for Open Access and Open Science. Also, since our survey used a self-selected sample (though distribution was very broad), care should be taken in interpreting the results, as they might be influenced by self-selection bias.

Data

The aggregated data underlying this post are shown in Table 1. For this analysis, we did not yet look at differences between scientific disciplines or career stage. Full (anonymized) data on this and all other survey questions will be made public on April 15th.

Country | Do you support the goal of Open Access? (Yes / No / I don’t know / # responses) | Do you support the goals of Open Science? (Yes / No / I don’t know / # responses)
Austria 95% 2% 3% 60 83% 3% 14% 66
Belgium 89% 5% 6% 103 88% 3% 9% 102
Bulgaria 81% 14% 5% 21 72% 0% 28% 18
Croatia 85% 12% 3% 33 94% 0% 6% 31
Cyprus 69% 8% 23% 13 69% 8% 23% 13
Czech Republic 73% 13% 13% 75 69% 13% 18% 78
Denmark 90% 1% 9% 80 84% 0% 16% 82
Estonia 85% 8% 8% 13 92% 8% 0% 13
Finland 84% 4% 12% 92 83% 3% 14% 95
France 87% 5% 8% 686 79% 5% 16% 699
Germany 87% 3% 9% 1165 76% 7% 18% 1179
Greece 81% 7% 12% 214 85% 4% 12% 222
Hungary 89% 9% 2% 45 83% 10% 7% 41
Ireland 81% 5% 15% 62 82% 5% 13% 62
Italy 79% 7% 14% 407 77% 4% 18% 413
Latvia 86% 0% 14% 7 83% 0% 17% 6
Lithuania 88% 0% 13% 8 75% 13% 13% 8
Luxembourg 86% 0% 14% 7 57% 0% 43% 7
Malta 100% 0% 0% 8 75% 0% 25% 8
Netherlands 89% 2% 9% 1610 75% 5% 20% 1627
Poland 86% 7% 7% 85 88% 5% 7% 83
Portugal 88% 5% 8% 129 84% 5% 11% 133
Romania 80% 5% 15% 82 85% 5% 10% 82
Slovakia 70% 5% 25% 20 82% 6% 12% 17
Slovenia 96% 0% 4% 27 96% 0% 4% 28
Spain 87% 3% 10% 537 88% 2% 10% 542
Sweden 90% 3% 6% 146 76% 6% 19% 145
United Kingdom 88% 3% 9% 1113 79% 4% 17% 1123
Total 87% 4% 9% 6848 79% 5% 17% 6923

Table 1 Aggregated data on support of Open Access and Open Science per EU member state.
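As an illustration, the country-level ratio plotted in Fig. 3 can be recomputed from Table 1 in a few lines. The sketch below uses a handful of rows from the table; the actual figure may differ slightly in rounding or presentation.

```python
import pandas as pd

# A few rows from Table 1 (share of 'Yes' answers per country).
data = pd.DataFrame({
    "country": ["Austria", "Belgium", "Netherlands", "Spain"],
    "oa_yes": [0.95, 0.89, 0.89, 0.87],
    "os_yes": [0.83, 0.88, 0.75, 0.88],
})

# Ratio > 1: relatively more support for Open Science than for Open Access.
data["os_oa_ratio"] = data["os_yes"] / data["oa_yes"]
print(data.sort_values("os_oa_ratio", ascending=False))
```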

Rising stars: fastest growing tools on Twitter

To gain some insight into the popularity of online tools for scholarly communication, we have been tracking the number of Twitter followers monthly for each of the 600 tools in our growing database.

Twitter followers as a measure
The number of Twitter followers is one of many possible measures of interest/popularity, each with its own limitations (see, for example, this blog post by Bob Muenchen). Twitter follower data are freely available, allow for semi-automatic collection and are relatively transparent, as the accounts of followers can be checked. On the other hand, Twitter data have some limitations: only about two-thirds of the tools in our database have their own Twitter account, and following a tool’s tweets does not equal usage (in timing or volume). Also, by definition the measure is restricted to people with Twitter accounts, which will favour younger generations and people more oriented towards online communication. Finally, the expression of interest by following happens at one moment in time, so a further rise or fall of a person’s interest in the tool is not reflected in this measure. Our global survey on research tool usage, which is currently running, will provide more substantiated data on actual tool usage.

Given these limitations, we consider the number of Twitter followers to be an indication of potential interest in a tool. Looking over time, the rate of growth in Twitter followers can give an indication of tools that are most rapidly gaining interest.
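As an illustration of the semi-automatic collection mentioned above, follower counts can be retrieved with a short Python script. The sketch below uses the tweepy library with placeholder credentials and a hypothetical list of accounts; our actual collection workflow may differ.

```python
import tweepy

# Placeholder credentials; register an app with Twitter to obtain real ones.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Hypothetical list of tool accounts taken from our database.
tool_accounts = ["figshare", "ProjectJupyter", "gitlab"]

for screen_name in tool_accounts:
    user = api.get_user(screen_name=screen_name)
    print(screen_name, user.followers_count)
```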

Rising stars
The following tables show the tools in our database with the largest relative increase in Twitter followers over the last six months (July 1, 2015-January 1, 2016), both for tools (n=207) that had over 1000 followers (Table 1) and for tools (n=137) with between 100 and 1000 followers (Table 2) on July 1, 2015. We used these thresholds to filter out Twitter accounts in their very early stages, which often see high growth rates at very low absolute numbers (e.g. a five-fold increase from 10 to 50 followers).

Rank | Tool / site | Year of launch | Research phase | Twitter followers Jan 1, 2016 | Twitter followers July 1, 2015 | Relative increase
1 GitLab.com 2014 Publication 22.9K 12.9K 1.78
2 Jupyter 2015 Analysis 5519 3362 1.64
3 Open Library of Humanities 2014 Publication 4964 3102 1.60
4 Reddit Science 2008 Outreach 2183 1417 1.54
5 Qualtrics 2002 Analysis 11.7K 7634 1.53
6 BioRxiv 2013 Publication 3281 2235 1.47
7 Open Science Framework 2013 Preparation 4524 3127 1.45
8 Kaggle 2010 Preparation 42.3K 29.9K 1.41
9 Import.io 2013 Analysis 14.0K 10.0K 1.40
10 The Conversation 2011 Outreach 44.5K 33.3K 1.34

Table 1. Tools with the largest relative increase in Twitter followers – July 2015-January 2016 (> 1000 followers on July 1, 2015)

Rank | Tool / site | Year of launch | Research phase | Twitter followers Jan 1, 2016 | Twitter followers July 1, 2015 | Relative increase
1 Benchling 2013 Analysis 1419 689 2.06
2 Piirus 2014 Outreach 1877 915 2.05
3 Sciforum 2009 Outreach 242 120 2.02
4 Before the abstract 2014 Outreach 464 236 1.97
5 Mark2Cure 2014 Discovery 751 425 1.77
6 ManyLabs 2014 Analysis 196 111 1.77
7 SciVal 2009 Assessment 397 225 1.76
8 Elsevier Atlas 2014 Outreach 611 372 1.64
9 BookMetrix 2015 Assessment 286 175 1.63
10 Prolific Academic 2015 Analysis 783 490 1.60

Table 2. Tools with the largest relative increase in Twitter followers – July 2015-January 2016 (100-1000 followers on July 1, 2015)

Some observations
The two groups of fast-growing tools distinguished here likely represent different phenomena: established tools with continuously rising popularity and new tools that are rapidly gaining popularity. Assuming more or less linear growth in Twitter followers, it will take longer to acquire (tens of) thousands of new followers than it will to gain a couple of hundred. This is reflected by the fact that the relative increase in Twitter followers is lower for the tools that had over 1000 followers in July (Table 1) than for the tools that had between 100 and 1000 followers (Table 2). Similarly, the tools in Table 2 are somewhat more recent than those in Table 1.
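For those who want to reproduce these rankings once follower data are available, the relative increase is simply the ratio of the two follower counts. A minimal pandas sketch, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical export: one row per tool with follower counts at the two dates.
df = pd.read_csv("twitter_followers.csv")  # columns: tool, followers_jul2015, followers_jan2016

df["relative_increase"] = df["followers_jan2016"] / df["followers_jul2015"]

# Table 1: tools with more than 1000 followers on July 1, 2015.
large = df[df["followers_jul2015"] > 1000].nlargest(10, "relative_increase")

# Table 2: tools with between 100 and 1000 followers on July 1, 2015.
small = df[df["followers_jul2015"].between(100, 1000)].nlargest(10, "relative_increase")

print(large[["tool", "relative_increase"]])
print(small[["tool", "relative_increase"]])
```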

Some notable exceptions are Open Library of Humanities and GitLab, which have quickly gained a very substantial following, and Jupyter, which might have seen many followers ‘transfer’ from @IPythonDev when IPython Notebooks continued as Project Jupyter. Not all ‘smaller’ tools are recent, either: both Sciforum and SciVal have been around for more than five years but have only recently become active on Twitter. Also, SciVal may have been mainly interesting to university administrators at first, but has since been made more accessible and interesting for ‘end user’ researchers.

Apparently, tools in scholarly communication do not go ‘viral’. Even for this group of fastest growers, the number of followers rarely doubles over the six-month period.

Looking at the research phase the tools in the tables are aimed at, we predominantly see tools for Analysis, Publication and Outreach represented. For Analysis and Outreach, this might reflect the fact that potential users of these specific tools are relatively active on Twitter (perhaps more so than users of popular tools for e.g. Writing or Discovery). For Publication, it might also be a reflection of a growing interest in new publication models among various stakeholder groups in scholarly communication.

Of course, these are all post hoc explanations that have not been tested, e.g. against a comparable set of tools with a lower relative increase in Twitter followers, or substantiated by more in-depth analysis, e.g. of the characteristics of the people following these tools and sites on Twitter.

Tools per research activity
To drill down further into tools that are rapidly gaining popularity for specific research activities across the research cycle, the polar bar chart below (Figure 1) shows the tools with the highest relative increase in Twitter followers over the past six months (July 2015-January 2016) for each of 30 distinct research activities. Again, we focused on tools that had over 100 followers on July 1, 2015.

Twitter risers per research activity_jan2016_bottomlegend

Figure 1. Tools with the largest relative increase in Twitter followers per research activity – July 2015-January 2016 (> 100 followers on July 1, 2015)

It is interesting to note that almost all tools in the polar bar chart are generic tools that can be used across fields and disciplines. Of the 30 tools in this figure, only BioRxiv, DH Commons, OLH, Flypapers and Benchling are field-specific. It is too early to conclude that a tool needs to be widely applicable for people to flock to its account, but this is in line with a trend towards generic solutions. This could be something to dive into further once we have the tool usage data from our own survey.

Timeline of tools

timeline-of-tools-banner-2

The number and variety of online tools and platforms for all phases of the research cycle have grown tremendously over the years. We have been charting this ‘supply side’ of the scholarly communication landscape, first in our figure of 101 innovative tools in six different phases of the research cycle (Fig. 1), and subsequently in our growing database of tools for 30 distinct research activities within these phases.

InnoScholComm logo 250x250

Fig 1. 101 Innovative tools in six research phases

To get a visual impression of the development of tools over time, we plotted the 600 tools currently in our database against the year they were created (Fig. 2, also available on Plot.ly).

tools-by-year-stacked-bar

Fig 2. New tools by research phase, 1994-2015

New online tool development rose sharply at the end of the 1990s and again at the end of the 2000s. The recent rise in 2013 and 2014 may be an artefact of the way we have been collecting these tools since 2013: through (social) media mentions, reviews in journals and crowdsourcing. All three sources focus on tools that have just been launched. Apart from special circumstances in higher education and research, there may also be effects here of the dot-com bubble at the end of the 1990s and the web 2.0 explosion in the second half of the 2000s. The interesting peak of new outreach tools in 2008 lacks a clear explanation. The slump in 2015 for all types of tools is due to the fact that it easily takes 6-12 months before new tools attract media attention.
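For readers who want to recreate a chart like Fig 2 once the database is released, a minimal pandas/matplotlib sketch (the file and column names are hypothetical):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of the tools database with launch year and research phase.
tools = pd.read_csv("tools_database.csv")  # columns: tool, year, research_phase

# Count new tools per year and phase, then plot as a stacked bar chart.
counts = tools.groupby(["year", "research_phase"]).size().unstack(fill_value=0)
counts.plot(kind="bar", stacked=True, figsize=(12, 6))
plt.ylabel("Number of new tools")
plt.xlabel("Year of launch")
plt.title("New tools by research phase")
plt.tight_layout()
plt.show()
```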

A more detailed view of tool development emerges when tools are plotted separately for the different activities within research phases, against the year (and month, where possible) they were created (Fig 3, also available on plot.ly). As an extra layer of information, we added the current number of Twitter followers (where available) as a proxy for the interest a tool has generated.

In interpreting this plot, there are some important considerations regarding the underlying data that should be taken into account:

  • We have limited inclusion to online tools specifically, excluding tools that are available as download only. Also, browser extensions are not included.
  • The picture for the early years (up to c. 2006) is less complete than that for more recent years. For example, early traditional publisher platforms are not included, and tools may have risen and fallen during this period and thus never have been added to our list.
  • We have assigned each tool to only one research activity. This affects the picture for tools that can be used for multiple research activities (like Mendeley, Figshare and ResearchGate).
  • The tools for the publication phase are distributed over many separate research activities, resulting in a seemingly less densely populated plot area.
  • It takes a while for (new) tools to accrue Twitter followers, which is why tools created in 2015 have relatively few Twitter followers.
  • The number of Twitter followers is one of many possible measures of interest/popularity, each with its own limitations (see, for example, this blog post by Bob Muenchen). Our global survey on research tool usage, which is currently running, will provide more substantiated data on actual tool usage.

Taking these considerations into account, some interesting observations can be made:

  • The slow start of the development of online tools for academia is remarkable, given that the internet was developed (at universities!) decades ago. The first website was launched in 1991 and the first graphical browser (Mosaic) was introduced two years later, but it took until 1997 before PubMed became available and another four years before the first web-based reference management tool (RefWorks) was launched.
  • Of the 30 research activities we identify, search (nr. 3), experiment (nr. 9), publish (nr. 24) and outreach (nr. 25) have had the longest continuous period of (online) tool development. For these activities, many online tools were developed prior to 2008. Activities for which tools have only become available more recently are getting access (nr. 4) and post-publication peer review and commenting (nr. 27/28).
  • Relatively few tools exist for sharing posters and presentations specifically, and no new ones have been developed recently. However, there are many other tools (like FigShare and ScienceOpen) that enable archiving posters as one of their functionalities.
  • Tools for the writing, outreach and assessment phases often have many followers, perhaps because these tools are often relevant for all research disciplines (and, for writing and outreach, even beyond academia). Tools for discovery and publication are more often discipline-specific, which might reflect persistent differences in publication cultures and the desire for selectivity.
400+ tools - bubble chart - Dec 2015

Fig 3. Tools per research phase and year – bubble size: Twitter followers (logarithmic)

101 days to go for 101 innovations survey

With 101 days to go before the 101 Innovations in Scholarly Communication survey closes it seems a good moment to let you know how far we have come and what’s still ahead.

Response volume

On October 31st, the (English) survey had garnered a total of 5373 responses. Daily responses are steady and sometimes show a peak due to distribution efforts of partnering institutions. It is good to see that many respondents also take the time to answer the open question.

Breakdown by research roles and disciplines

Faculty, PhD students and postdocs are the three biggest groups of respondents. Most respondents are from the life sciences. Other disciplines are also well represented, with only law lagging. But these are absolute figures and should of course be compared to the underlying populations. The initial bias towards librarians has weakened, but they are still overrepresented. If only every librarian who takes the survey would pass it on to three researchers…

101 days - RR

101 days - disciplines

Translation into 6 world languages

In October the survey was translated into 6 languages, after we saw that response in some countries was relatively low. Next to English, it is now available in Spanish, French, Russian and Chinese, while Japanese and Arabic will follow soon. Any help reaching out to research communities in these language areas is appreciated.

Custom URL partners

Some 60 institutions have partnered with us so far. In exchange for distributing the survey, they get the resulting data for their institution. We hope to find still more partners, especially in the language areas now served by the translations. Reaching out to libraries, which often act as intermediaries, also involves convincing them that this is not an acquisition pitch but a free opportunity to gain insight into their patrons’ research practices.

Press & presentations

Generally the survey has been very well received, thanks to its timeliness and graphical layout. The initial poster from which the survey grew was featured on the InsideHigherED blog, and the survey and broader project were the subject of a podcast on the Scholarly Kitchen blog. We presented the ideas behind it at the OAI9 conference in Geneva and at the Open Access Week 2015 meeting in Brussels, and showed an example of some preliminary results at the 2:AM altmetrics conference in Amsterdam.

Still ahead

In the next 101 days we hope to see the number of responses and partnering institutions double. But we will also work on the next steps: preparing the data for release, preparing scripts for our own analyses, finding a way to offer the data in a friendly dashboard style for anyone to work with, and interesting other researchers (you?) in using the data to test all kinds of hypotheses.

Tools and sites used for impact measurement (some preliminary results)

101 Innovations - Assessment segment

For the 2:AM Altmetrics conference on Oct 7-8, 2015 in Amsterdam, we looked at some of our preliminary survey results on tools and sites used for impact measurement. For this, we did a brief, non-statistical analysis of responses up until October 1 from researchers (PhD students, postdocs and faculty, n=3481) and librarians (n=638), comparing their answers to the following survey question:

101-innovations-survey-impact

The percentage of respondents that selected the various tools shown as preselected options in the survey question differed notably between researchers and librarians. Among researchers, traditional tools like JCR (impact factor), Web of Science and Scopus are used considerably more often than tools for altmetrics (Altmetric, ImpactStory and PLOS article-level metrics). Librarians, however, selected Altmetric about as often as the more traditional metrics tools.

It should be noted here that in the survey, respondents who support researchers (rather than actively carry out research themselves) are asked to indicate which tools they recommend (rather than actively use). This means that the results for librarians indicate endorsement, rather than active use.

The fact that librarians selected altmetrics tools like Altmetric and ImpactStory much more often than researchers did could reflect either a greater awareness of or a greater enthusiasm for these tools.

impact-percentage respondents

Percentage of respondents indicating that they use (researchers) or recommend (librarians) the tools shown as preselected options in the survey question

To investigate possible differences between fields in the use of altmetrics tools, we broke down researchers’ responses by discipline and looked at the share of altmetrics tools among all metrics tools mentioned by these groups. This analysis included tools mentioned in the ‘others’ category.
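For transparency, the share calculation itself is straightforward. A minimal sketch, with hypothetical file and column names and an illustrative (not authoritative) classification of altmetrics tools:

```python
import pandas as pd

# Hypothetical long-format data: one row per (respondent, discipline, tool) mention.
mentions = pd.read_csv("impact_tool_mentions.csv")  # columns: respondent_id, discipline, tool

# Illustrative classification of tools counted as altmetrics tools.
altmetrics_tools = {"Altmetric", "ImpactStory", "PLOS ALM", "ResearchGate"}
mentions["is_altmetrics"] = mentions["tool"].isin(altmetrics_tools)

# Share of altmetrics tools among all metrics tools mentioned, per discipline.
share = mentions.groupby("discipline")["is_altmetrics"].mean()
print(share.sort_values(ascending=False))
```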

Perhaps surprisingly, these preliminary results indicate a relatively large share of altmetrics tools mentioned by researchers in the Arts & Humanities, compared to many other disciplines. It should be noted that the sample size for this group was comparatively small, and we have not yet done any statistical analysis. Having said that, it could be hypothesized that because Arts & Humanities scholars traditionally have limited use for citation databases like Scopus and Web of Science, due to the coverage of these databases and the absence of the humanities from JCR, they more readily embrace the opportunity to measure the impact of their research output via altmetrics, insofar as this output can indeed be identified by altmetrics providers.

As yet we have no explanation for the observed difference between the physical and life sciences, other than perhaps the wild guess that researchers in the physical sciences do not yet take altmetrics as seriously.

impact-altmetrics share

Share of altmetrics tools among all tools selected or mentioned by researchers, per discipline

Finally, what other tools were mentioned by participants, in addition to the 7 preselected tools shown above? Across disciplines, by far the most often mentioned ‘other’ tool was Google Scholar (which was not counted as an altmetrics tool), followed by ResearchGate (which was). Strikingly, almost nobody mentioned using alternative journal rankings.

Disclaimer: the results shown here are based on preliminary data, and are to be treated as such. No claims are made as to the statistical significance of the results, or lack of bias in the data. Our survey is running until February 2016, with many institutional partners yet to start their distribution. In addition, we are currently rolling out translations of the survey to increase participation in non-Western countries.

The poster addendum with these results, showed at the 2:AM Altmetrics conference, is also available on Figshare:
http://dx.doi.org/10.6084/m9.figshare.1572175

4000 survey responses – geographical distribution and the need for translation

Last week we quietly passed the 4,000-response mark on our survey. With the summer season waning, it seems a good moment to look at where we stand. The survey has been running for 15 weeks, with another 23 weeks to go. We’re glad to have 4,000 responses, but they are not nearly enough to allow for detailed analyses, e.g. by field and country. We would like to see that number double or triple before the survey ends on February 10, 2016. And what is perhaps more important: we would like to see a more or less even global distribution.

A self-selected non-probability sample such as the one we work with is bound to have many biases in the response, due to uneven distribution and uptake across groups and countries. The level of survey uptake per country is probably affected by:

  • (Effect of) distribution and promotion actions
  • Propensity of people in a certain country to take surveys
  • Degree to which a survey on research tools is considered relevant or interesting
  • Ability of target groups to understand the survey, largely due to differences in (foreign) language proficiency

Response levels per 100 billion US$ GDP as of August 22, 2015; weighted average = 5.1

This map shows the geographical variation in uptake of our survey. To make things comparable we need to use relative numbers of responses. Ideally we would express them relative to the number of researchers in each country; however, those figures are not available for most countries. Instead we use GDP for 2013/2014 (World Bank data) as a proxy, as we expect countries with larger economies to have more active researchers.
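The normalization itself is simple. A minimal sketch of the calculation, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical inputs: survey responses per country and World Bank GDP figures (current US$).
responses = pd.read_csv("responses_per_country.csv")   # columns: country, n_responses
gdp = pd.read_csv("worldbank_gdp_2013_2014.csv")       # columns: country, gdp_usd

df = responses.merge(gdp, on="country")
df["responses_per_100b_gdp"] = df["n_responses"] / (df["gdp_usd"] / 1e11)

# Overall (weighted) average response level used as the benchmark in the map.
average = df["n_responses"].sum() / (df["gdp_usd"].sum() / 1e11)
print(df.sort_values("responses_per_100b_gdp", ascending=False).head())
print(f"Weighted average: {average:.1f}")
```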

The map shows response levels at or above average (green) in many countries in Europe, Oceania and Canada. Uptake in Russia, Latin America and South Asia is below average (orange/yellow). Despite many responses from the US, that country is also still slightly below average with 4.53. Levels in many countries in East Asia, the Arab World and Africa are very low (red) or even zero (white).

As said, many factors come into play, but it seems obvious that to increase response levels outside Europe and the Anglo-Saxon countries, translation into a few world languages would help. To find out which languages are the most important for us, we calculated, for each language area, the number of responses needed to bring below-average country levels up to the average, relative to their GDP (a sketch of this calculation follows the table below):

Language | Responses needed to get to average
Chinese, simplified 484
Japanese 170
Arabic 124
Spanish 104
Portuguese 82
French 68
Korean 59
Russian 56
Bahasa Indonesia 43
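The sketch below illustrates the calculation behind this table, building on the per-country response levels computed above; the file and column names are hypothetical and the mapping of countries to language areas is simplified.

```python
import pandas as pd

# Hypothetical input with a language_area assigned to each country.
df = pd.read_csv("responses_gdp_language.csv")  # columns: country, language_area, n_responses, gdp_usd

average = df["n_responses"].sum() / (df["gdp_usd"].sum() / 1e11)

# Shortfall per country: responses needed to lift a below-average country to the average level.
df["target"] = average * (df["gdp_usd"] / 1e11)
df["shortfall"] = (df["target"] - df["n_responses"]).clip(lower=0)

# Sum the shortfalls per language area to prioritize translations.
print(df.groupby("language_area")["shortfall"].sum().sort_values(ascending=False).round())
```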

This means that we are now working towards having the survey and some other texts translated into …

  • simplified Chinese
  • Japanese
  • Arabic
  • Spanish
  • French
  • Russian

whereas we hope to increase uptake in Brazil, Korea and Indonesia by partnering with local institutions to distribute the English version of the survey.

We are looking for support in reviewing, testing and distributing the translations in these six languages. If you have any ideas or contacts that might be helpful for that, please let us know!

First 1000 responses – tool combinations

Apart from looking at the most popular tools for single research activities, we can also look at which tools are used together.

Below is a first attempt at visualizing this for the first 1000 responses to our survey on scholarly communication. For this figure, we looked at the four most popular tools listed for each research activity (including, in this case, tools mentioned by respondents as ‘others’). For each tool combination, the absolute number of people (out of 1000) who use both tools is visualized.
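For those interested in reproducing such co-use counts, the core of the calculation is a simple matrix product. A minimal pandas sketch, assuming a hypothetical respondent-by-tool matrix:

```python
import pandas as pd

# Hypothetical wide-format data: one row per respondent, one 0/1 column per tool.
usage = pd.read_csv("tool_usage_matrix.csv", index_col="respondent_id").fillna(0).astype(int)

# Co-occurrence matrix: number of respondents using each pair of tools.
co_occurrence = usage.T.dot(usage)

# Example lookup: how many respondents use both LaTeX and SPSS?
print(co_occurrence.loc["LaTeX", "SPSS"])
```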

[click on the figure for an interactive version made with plot.ly]

Tool combinations (first 1000 responses)

This figure is merely intended as an exploration of what is possible with our survey data. These preliminary results are expected to be biased due to the distribution channels (Twitter, mailing lists) used in the early weeks of the survey. Also, results plotted as relative rather than absolute values will be much more informative; this is something we are currently working on.

However, even looking at the patterns in the figure above, some interesting observations can be made:

  • For some activities, like sharing protocols and peer review outside that organized by journals, very few people mention using any tools at all;
  • Some tools are clearly being used for multiple activities (like ResearchGate for sharing publications and researcher profiling, among other activities), which is obviously what the makers of the tool (or the publishers that buy them) are hoping for;
  • We can identify some popular tools that are hardly ever used together (like LaTeX and SPSS). While this specific example most likely reflects separate populations of researchers, these kinds of observations are also interesting in view of interoperability of tools.

Of course, we need many more responses to be able to draw any meaningful conclusions and to break down the results for researchers of different disciplines, career stages and countries. We especially hope that institutions/libraries will take the opportunity to distribute the survey and get the data for their institution.

Please help us spread the survey!

First 1000 responses – most popular tools per research activity

In this post we present a straight percentage count of the preselected options for the 17 research activities that we asked about in our survey Innovations in Scholarly Communication. These figures represent the first 1000 responses.

Please keep in mind that these first 1000 responses do have a bias due to the predominant distribution methods in the first weeks of the survey (Twitter and other social media, and mailing lists). The final results will probably deviate substantially from what you see below, and be more reliable. We need many more responses and hope that institutions/libraries will take the opportunity to distribute the survey and get the data for their institution.

Also note that many respondents specified which other tools they use, in addition to the preselected options. Together, the 1000 respondents mentioned over 1000 (!) different tools not yet included in these counts. Further, sizeable groups skipped questions (which they were allowed to do) or specified that they used “none”. More about these issues in a later post.

We are not going to formally analyse these figures now but merely present them to show what types of counts are possible. We are seeing interesting patterns already but are careful not to jump to conclusions at this early stage. Please leave comments if you wish.
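For completeness, this is how such a percentage count can be produced once the data are available. A minimal pandas sketch, with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical long-format export: one row per (respondent, research activity, selected preset tool).
answers = pd.read_csv("survey_answers.csv")  # columns: respondent_id, activity, tool

# Number of respondents who answered each activity question.
respondents_per_activity = answers.groupby("activity")["respondent_id"].nunique()

# Number of respondents selecting each preset option, per activity.
counts = answers.groupby(["activity", "tool"])["respondent_id"].nunique()

# Percentage of respondents per activity selecting each option.
percentages = counts.div(respondents_per_activity, level="activity").mul(100).round(1)
print(percentages.sort_values(ascending=False))
```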

Typeform 1000 - Search

Typeform 1000 - Access

Typeform 1000 - Alerts

Typeform 1000 - Read

Typeform 1000 - Analyze

Typeform 1000 - Notebooks

Typeform 1000 - Write

Typeform 1000 - Reference management

Typeform 1000 - Share publications

Typeform 1000 - Share data

Typeform 1000 - Select journal

Typeform 1000 - Publications

Typeform 1000 - Share posters-presentations

Typeform 1000 - Outreach

Typeform 1000 - Researcher profiles

Typeform 1000 - Peer review

Typeform 1000 - Measure impact

First 1000 responses – demographics

On these pages we keep you informed of the progress being made with the survey Innovations in Scholarly Communication. We will share preliminary results, some methodological background, experiences and reactions that people share with us, and more.

Within the first month, we received over 1000 responses to our survey. A good start! (but not nearly enough, so please keep spreading the word).

In the next few posts, we’ll present some preliminary results based on the first 1000 responses. Please bear in mind these results are expected to be biased due to the way we have distributed the survey so far (mainly through Twitter and other social media and via mailing lists).

Demographics
The responses show a nice distribution across research roles and disciplines, although librarians are somewhat overrepresented so far. We hope librarians will not only fill in the survey themselves, but also promote it within their institution! With more responses, we will be able to break down results according to research role etc.

1000 responses - Research role

1000 responses - Discipline

Looking at the geographical distribution of respondents (based on the answers to the question ‘What is the country of your current (or last) affiliation?’) reveals that we have received responses from people in 64 countries, with most coming from the US, the UK and the Netherlands. Still quite a way to go before we have a truly global distribution!

1000 responses- geographical distribution