Plan S feedback

Feedback on the guidance on the Implementation of Plan S by

Bianca Kramer https://orcid.org/0000-0002-5965-6560

Jeroen Bosman https://orcid.org/0000-0001-5796-2727

Dated 2019-02-08

 

 

We have a few overall recommendations:

  • Improve on the why: make it clearer that Plan S is part of a broader transition towards open science, not only an effort to make papers available and OA publishing cheaper. It is part of changes to make science more efficient, reliable and reusable.
  • Plan S brings great potential, and with that also comes great responsibility for cOAlition S funders. From the start, Plan S has been criticized for its perceived focus (in intent and/or expected effects) on APC-based OA publishing. In our reading, both the principles and the implementation guidance allow for all forms of full OA publishing, including diamond OA and new forms of publishing such as overlay journals. However, whether Plan S will indeed encourage such bibliodiversity and the accompanying equity in publishing opportunities will depend to no small extent on the actual recognition and support of non-APC-based gold OA models by cOAlition S funders. Examples of initiatives to consider in this regard are the OJS journal systems by PKP, initiatives based on Coko open source technology, the Open Library of Humanities, SCOAP3, the Free Journal Network, and also SciELO and Redalyc in Latin America.
  • The issue of evaluation and assessment is tied closely to the effects Plan S can or will have. It is up to cOAlition S funders to take actionable steps to turn their commitment to fundamentally revise the incentive and reward system of science in line with DORA into practice, at the same time as they put the Plan S principles into practice. The two can mutually support each other, as open access journals that also implement other open science criteria, such as pre-registration, requirements for FAIR data and selection based on rigorous methodological criteria, will facilitate evaluation based on research quality.
  • Make sure to (also) provide Plan S in the form of one integrated document containing the why, the what and the how in one place. Currently it is too easy to overlook the why. That document should be openly licensed and shared in a reliable archive.
  • In the implementation document include a (graphical) timeline of changes and deadlines.

 

Looking at your first question for feedback (Is there anything unclear or are there any issues that have not been addressed by the guidance document?) we would like to bring a number of issues to your attention.

 

Feedback on article 2:

  • There is uncertainty over the acceptance of overlay journals and, more generally, journal-external peer review systems. The implementation document lists as a basic requirement for journals and platforms that they are registered in DOAJ or applying for registration with DOAJ. The problem is that we are not sure whether DOAJ will list/accept non-journal peer review platforms or overlay journals. They do list SciPost Physics, but SciPost considers itself a full-fledged publication platform. We understand that it is the cOAlition’s intention to support this route, but as it is in some ways uncharted territory, it would be wise to specifically indicate how quality certification is done for non-journal venues.

 

Feedback on article 8:

  • Acknowledging the resulting limits on potential (re)use, consider including an opt-out from the license requirements by accepting CC-BY-ND when requested, in order to increase support from the humanities.

 

Feedback on article 9:

  • Acceptance of separation of publishing and peer review across two locations/systems.
    The implementation guidance text potentially casts some doubt on the eligibility of overlay journals when the publication (including any revisions following peer review) resides on e.g. a preprint server or repository, rather than being published on the overlay journal platform. In these cases, only the peer review is taken on by the overlay journal, and the article would of course be listed as being included in the overlay journal. In terms of the four traditional functions of publishing, the overlay journal would serve the functions of certification and dissemination, but not those of registration and archiving.
    The guidance states: “Open Access platforms referred to in this section are publishing platforms for the original publication of research output (for example scholarly articles and conference proceedings). Platforms that merely serve to aggregate or re-publish content that has already been published elsewhere are not included.” In this regard, it is also interesting to note that Jean-Sébastien Caux commented on our earlier overview of the (then) eight routes towards compliance that he does not consider SciPost an overlay journal in that sense of the word, because SciPost does publish articles on its own platform (https://101innovations.wordpress.com/2018/10/22/eight-routes-towards-plan-s-compliance/#comment-203). A possible way to elucidate the intent of cOAlition S in this regard might be to explicitly state (perhaps added to the paragraph quoted above) that overlay journals taking on peer review and publishing the resulting articles are compliant, even when the articles themselves do not reside on the platform of the overlay journal. But this is indeed relatively uncharted territory.

 

Feedback on articles 9 and 10:

  • There are quite a few (technical) requirements for journals and repositories. We would like to see cOAlition S commit to supporting the implementation of those requirements by smaller (especially non-APC-based) journals and repositories. This can be done by (financially) supporting technical solutions and by co-organizing training, materials (e.g. videos) and meetings to help implementation.
  • The requirements for journals do not seem to apply to hybrid journals in transformative agreements. This creates the strange situation that a lot of hybrid journals will be held to much lower standards than full OA journals, platforms and repositories, and do not have to invest until (in some cases, depending on agreement timing) 2025. To redress this to some extent, we would advise relaxing the technical and other requirements mentioned in articles 9.2 and 10.2 (XML, JATS (or equivalent), API, CC0 metadata including references, and transparent costs/prices), for instance until 2021 (instead of 2020).

 

Feedback on article 11:

  • It now says “COAlition S acknowledges existing transformative agreements. However, from 2020 onward, new agreements need to fulfil the following conditions to achieve compliance with Plan S”. There is a chance that, by signing long-term contracts before 2020, hybrid journals could remain compliant even after 2024. To avoid that, we would change the wording to include a maximum contract period for existing (pre-2020) contracts to be acknowledged, e.g. change this into “cOAlition S acknowledges existing transformative agreements with contract periods that do not go beyond 2022”.
  • We also recommend replacing ‘existing transformative agreements’ with ‘existing off-setting, read-and-publish and publish-and-read agreements’ to prevent confusion as to what is meant by ‘transformative agreements’.
  • It now says “The negotiated agreements need to include a scenario that describes how the publication venues will be converted to full Open Access after the contract expires.” To avoid leaving room for multiple interpretations of the flipping deadline, we would change the phrasing in such a way that it is beyond any doubt what exactly is meant (e.g. “at the moment the contract expires”, or “within a year after the contract expires”).

 

 

Towards a Plan S gap analysis? (2) Gold open access journals in WoS and DOAJ

(NB this post is accompanied by another post, on open access potential across disciplines, in the light of Plan S)

In our previous blogpost, we explored open access (OA) potential (in terms of journals and publications) across disciplines, with an eye towards Plan S. For that exercise, we looked at a particular subset of journals, namely those included in Web of Science. We fully acknowledge that this practical decision leads to limitations and bias in the results. In particular, this concerns a bias against:

  • recently launched journals
  • non-traditional journal types
  • smaller journals not (yet) meeting the technical requirements of WoS
  • journals in languages other than English
  • journals from non-Western regions

To further explore this bias, and give context to the interpretation of results derived from looking at full gold OA journals in Web of Science only, we analyzed the inclusion of DOAJ journals in WoS per major discipline.

We also looked at the proportion of DOAJ journals (and articles/reviews therein) in different parts of the Web of Science Core Collection that we used: either in the Science Citation Index Expanded (SCIE) / Social Sciences Citation Index (SSCI) /Arts & Humanities Citation Index (AHCI), or in the Emerging Sources Citation Index (ESCI).

The Emerging Sources Citation Index contains a range of journals not (yet) indexed in the other citation indexes, including journals in emerging scientific fields and regional journals. It uses the same quality criteria for inclusion as the other citation indexes, notably: journals should be peer reviewed, follow ethical publishing practices, meet Web of Science’s technical requirements, and have English-language bibliographic information. Journals also have to publish actively, with current issues and articles posted regularly. Citation impact and a strict publication schedule are not criteria for inclusion of journals in ESCI, which means that newer journals can also be part of ESCI. Journals in ESCI and the AHCI do not have a Clarivate impact factor.

Method
We compared the number of DOAJ journals in Web of Science to the total number of journals in DOAJ per discipline. For this, we made a mapping of the LCC classification used in DOAJ to the major disciplines used in Web of Science, combining Physical Sciences and Technology into one to get four major disciplines.

For a number of (sub)disciplines, we identified the number of full gold journals in the Web of Science Core Collection, as well as the number of publications from 2017 (articles & reviews) in those journals. We also looked at what proportion of these journals (and the publications therein) are listed in ESCI as opposed to SCIE/SSCI/AHCI. For subdisciplines in Web of Science, we identified the 10 research areas in each major discipline with the highest number of articles & reviews in 2017. Web of Science makes use of data from Unpaywall for OA classification at the article level.
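As an illustration of this counting step, here is a minimal R sketch; the file and column names are hypothetical placeholders, and the actual data and mapping are in the Zenodo deposit linked below.

library(dplyr)
doaj <- read.csv("doaj_journals.csv")          # DOAJ journal list, with ISSN and LCC subject
wos  <- read.csv("wos_journals.csv")           # WoS Core Collection title list, with citation index
map  <- read.csv("lcc_to_wos_discipline.csv")  # our mapping of LCC classes to four major disciplines

doaj %>%
  left_join(map, by = "lcc_class") %>%
  left_join(wos, by = "issn") %>%
  group_by(discipline) %>%
  summarise(n_doaj     = n(),
            n_in_wos   = sum(!is.na(wos_index)),
            n_in_esci  = sum(wos_index == "ESCI", na.rm = TRUE),
            pct_in_wos = round(100 * n_in_wos / n_doaj, 1))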

All data underlying this analysis are available on Zenodo: https://doi.org/10.5281/zenodo.1979937

Results

Looking at the total number of journals in DOAJ and the proportion thereof included in Web of Science (Fig 1, Table 1) shows that Web of Science covers only 32% of journals in DOAJ, and 66% of those are covered in ESCI. For Social Sciences and Humanities, the proportion of DOAJ journals included in WoS is only 20%, and >80% of these journals are covered in ESCI, not SSCI/AHCI. This means that only looking at WoS leaves out 60-80% of DOAJ journals (depending on discipline), and only looking at the ‘traditional’ citation indexes SCIE/SSCI/AHCI restricts this even further.


Fig 1. Coverage of DOAJ journals in WoS


Table 1. Coverage of DOAJ journals in WoS (percentages)

We then compared the proportion of DOAJ journals covered in SCIE/SSCI/AHCI versus ESCI to the proportion of publications in those journals in the two sets of citation indexes (Fig 2). This reveals that for Physical Sciences & Technology and for Life Sciences & Medicine, the majority of full gold OA articles in WoS is published in journals included in SCIE, indicating that journals in ESCI might predominantly be smaller, lower-volume journals. For Social Sciences and for Humanities, however, journals in ESCI account for the majority of gold OA articles in WoS. This means that, due to WoS indexing practices, a large proportion of gold OA articles in these disciplines is excluded when considering only what’s covered in SSCI and AHCI.


Fig 2. Gold OA journals and publications in WoS

The overall patterns observed for the major disciplines can be explored in more detail by looking at subdisciplines (Fig 3). Here, some interesting differences between subdisciplines within a major discipline emerge.

  • In Physical Sciences and Technology, three subdisciplines (Engineering, Mathematics and Computer Sciences) have a large proportion of full OA journals covered in ESCI rather than SCIE, and especially for Engineering, these account for a sizeable part of full gold OA articles in that subdiscipline.
  • In Life Sciences and Biomedicine, General and Internal Medicine seems to be an exception, with both the largest proportion of full OA journals in ESCI and the largest share of full gold OA publications coming from these journals. In contrast, in Cell Biology, virtually all full gold OA publications are from journals included in SCIE.
  • In Social Sciences, only in Psychology is the majority of full gold OA publications in journals covered in SSCI, even though for this discipline, as for all others in Social Sciences, the large majority of full gold OA journals is part of ESCI, not SSCI.
  • In Arts & Humanities the pattern seems to be consistent across subdisciplines, perhaps with the exception of Religion, which seems to have a relatively large proportion of articles in AHCI journals, and Architecture, where virtually all journals (and thus publications) are in ESCI, not AHCI.


Fig 3. Full gold OA journals and publications in Web of Science, per subdiscipline

Looking beyond traditional citation indexes

Our results clearly show that in all disciplines, the traditional citation indexes in WoS (SCIE, SSCI and AHCI) cover only a minority of existing full gold OA journals. Looking at publication behaviour, journals included in ESCI account for a large number of gold OA publications in many (sub)disciplines, especially in Social Sciences and Humanities. Especially for an analysis of the availability of full OA publication venues in the context of Plan S, it will be interesting to look closer at titles included in both SCIE/SSCI/AHCI and ESCI per (sub)discipline and assess the relevance of these titles to different groups of researchers within that discipline (for instance by looking at publication volume, language, content from cOAlition S or EU countries, readership/citations from cOAlition S or EU countries). Looking at publication venues beyond traditional citation indexes fits well with the ambition of Plan S funders to move away from evaluation based on journal prestige as measured by impact factors. It should also be kept in mind that ESCI marks but a small extension of coverage of full gold OA journals, compared to the large part of DOAJ journals that are not covered by WoS at all.

Encore: Plan S criteria for gold OA journals

So far, we have looked at coverage of all DOAJ journals, irrespective of whether they meet specific criteria of Plan S for publication in full OA journals and platforms, including copyright retention and CC-BY license*.

Analyzing data available through DOAJ (supplemented with our mapping to WoS major disciplines) shows that currently, 28% of DOAJ journals comply with these two criteria (Fig 4). That proportion is somewhat higher for Physical Sciences & Technology and Life Sciences & Medicine, and lower for Social Sciences & Humanities. It should be noted that when a journal allows multiple licenses (e.g. CC-BY and CC-BY-NC-ND), DOAJ includes only the strictest license in its journal list download. Therefore, the percentages shown here for compliant licensing are likely an underestimate. Furthermore, we want to emphasize that this analysis reflects the current situation, and thereby could also be thought of as pointing towards the potential of available full OA venues if publishers adapt their policies on copyright retention and licensing to align with the criteria set out in Plan S.
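A minimal sketch of this compliance check on the DOAJ journal list download; the column names used here are assumptions, not the exact DOAJ headers:

library(dplyr)
doaj <- read.csv("doaj_journals.csv")  # DOAJ journal list download
doaj %>%
  mutate(compliant = license == "CC BY" &                  # CC-BY only; CC-BY-SA and CC0 not yet included (see note below)
                     author_holds_copyright == "Yes") %>%  # copyright retention
  summarise(pct_compliant = round(100 * mean(compliant, na.rm = TRUE), 1))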


Fig 4. Copyright criteria (CC-BY and copyright retention) of DOAJ journals

*The current implementation guidance also indicated that CC-BY-SA and CC0 would be acceptable. These have not been included in our analysis (yet).

Towards a Plan S gap analysis? (1) Open access potential across disciplines

(NB this post is accompanied by a second post on presence of full gold open access journals in Web of Science and DOAJ)

From the proposed implementation guidelines for Plan S, it has become clear that there will be, for the coming years at least, three routes to open access (OA) that are compliant with Plan S:

  • publication in full open access journals and platforms
  • deposit of the author accepted manuscript (AAM) or publisher version (VOR) in open access repositories
  • publishing in hybrid journals that are part of transformative agreements

Additional requirements concern copyright (copyright retention by authors or institutions), licensing (CC-BY, CC-BY-SA or CC0), embargo periods (no embargoes) and technical requirements for open access journals, platforms and repositories.

In the discussion surrounding Plan S, one of the issues that keeps coming back is how many publishing venues are currently compliant. Or, phrased differently, how many of their current publication venues researchers fear will no longer be available to them.

However, the current state should be regarded as a starting point, not the end point. As Plan S is meant to effect changes in the system of scholarly publication, it is important to look at the potential for moving towards compliance, both on the side of publishers as well as on the side of authors.

https://twitter.com/lteytelman/status/1067635233380429824

Method
To get a first indication of the open access potential across different disciplines, we looked at a particular subset of journals, namely those in Web of Science. For this first approach we chose Web of Science because of its multidisciplinary nature, because it covers both open and closed journals, because it has open access detection, because it offers subject categories and, finally, because of its functionality for generating and exporting frequency tables of journal titles. We fully recognize the inevitable bias related to using Web of Science as a source, and address this further below and in an accompanying blogpost.

For a number of (sub)disciplines, we identified the proportion of full gold, hybrid and closed journals in Web of Science, as well as the proportion of hybrid and closed journals that allow green open access by archiving the AAM/VOR in repositories. We also looked at the number of publications from 2017 (articles & reviews) that were actually made open access (or not) under each of these models.

Some methodological remarks:

  • We used the data available in Web of Science for OA classification at the article level. WoS uses Unpaywall data but imposes its own classification criteria:
    • DOAJ gold: article in journal included in DOAJ
    • hybrid: article in non-DOAJ journal, with CC-license
      (NB This excludes hybrid journals that use a publisher-specific license)
    • green: AAM or VOR in repository 
  • For journal classification we did not use a journal list, but classified a journal as gold, hybrid and/or allowing green OA if at least one article from 2017 in that journal was classified as such (see the sketch after this list). This method may underestimate:
    • journals allowing green OA in fields with long embargoes (esp. A&H)
    • journals allowing hybrid or green OA if those journals have very low publication volumes (increasing the chance that a certain route is not used by any 2017 paper)
  • We only looked at green OA for closed articles, i.e. when articles were not also published OA in a gold or hybrid journal.
  • Specific plan S criteria are not (yet) taken into account in these data, i.e. copyright retention, CC-BY/CC-BY-SA/CC0 license, no embargo period (for green OA) and being part of transformative agreements (for hybrid journals)
  • For the breakdown across (sub)disciplines, we used WoS research areas (which are assigned at the journal level). We combined Physical Sciences and Technology into one to get four major disciplines. In each major discipline, we identified the 10 subdisciplines with the highest number of articles & reviews in 2017 (excluding ‘other topics’ and replacing Mechanics with Astronomy & Astrophysics because of specific interest in green OA in Astronomy & Astrophysics)
  • We used the full WoS Core collection available through our institution’s license, which includes the Science Citation Index Expanded (SCIE), the Social Sciences Citation Index (SSCI), the Arts & Humanities Citation Index (AHCI) and the Emerging Sources Citation Index (ESCI).
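The journal-level classification mentioned above can be sketched as follows in R; the file and column names are hypothetical, and the underlying data are in the Zenodo deposit linked below.

library(dplyr)
articles <- read.csv("wos_2017_articles.csv")  # one row per 2017 article/review, with journal and OA class
journal_class <- articles %>%
  group_by(journal) %>%
  summarise(gold   = any(oa_class == "DOAJ gold"),  # at least one 2017 article in a DOAJ-listed journal
            hybrid = any(oa_class == "hybrid"),     # at least one CC-licensed article in a non-DOAJ journal
            green  = any(oa_class == "green"))      # at least one AAM/VOR found in a repository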

All data underlying this analysis are available on Zenodo:
https://doi.org/10.5281/zenodo.1979937

Results

As seen in Figure 1A-B, the proportion of full gold OA journals is relatively consistent across major disciplines, as is the proportion of articles published in these journals. Both are between 15-20%. Despite a large proportion of hybrid journals in Physical Sciences & Technology and Life Sciences & Medicine, the actual proportion of articles published OA in hybrid journals is quite low in all disciplines. The majority of hybrid journals (except in Arts & Humanities) allow green OA, as do between 30-45% of closed journals (again except in Arts & Humanities). However, the actual proportion of green OA at the article level is much lower. As said, embargo periods (especially those exceeding 12 months) might have an overall effect here, but the difference between potential and uptake remains striking.

https://101innovations.files.wordpress.com/2018/11/all1.png


Fig 1A-B. OA classification of journals and publications (Web of Science, publication year 2017)

Looking at subdisciplines reveals interesting differences both in the availability of open access options and the proportion of articles & reviews using these options (Fig 2).

  • In Physical Sciences and Technology, the percentage of journals that are fully gold OA is quite low in most fields, with slightly higher levels in energy fuels, geology, optics and astronomy. Uptake of these journals is lower still, with only the optics and geology fields slightly higher. Hybrid journals are numerous in this discipline but see their gold and green open access options used quite infrequently. The use of green OA for closed journals, where allowed, is also limited, with the exception of astronomy (note that green sharing of preprints is not included in this analysis). In all fields in this discipline over 25% of WoS-indexed journals seem to have no open options at all. Of all subdisciplines in our analysis, those in the physical sciences display the starkest contrast between the ample OA options and their limited usage.
  • In Life Sciences & Biomedicine, the penetration of full gold OA journals is higher than in Physical Sciences, but with starker differences, ranging from very low levels in environmental science and molecular biochemistry to much higher levels for general internal medicine and agriculture. In this discipline, uptake of gold OA journals is quite good, again especially in general internal medicine. The availability of hybrid journals is quite high but their use is limited; exceptions are cell biology and cancer studies, which do show high levels of open papers in hybrid journals. Green sharing is clearly better than in Physical Sciences, especially in fields like neurosciences, oncology and cell biology (likely also due to PMC / EuropePMC), but still quite low given the number of journals allowing it.
  • In Social Sciences there is a large percentage of closed non-hybrid subscription journals, but many allow green OA sharing. Alas, uptake of that option is limited, as far as detected using Unpaywall data. The one exception in this regard is psychology, with a somewhat higher level of green sharing. Hybrid OA publishing is available less often than in Physical Sciences or Life Sciences, but with relatively high shares in psychology, sociology, geography and public administration. The fields with the highest shares of full gold OA journals are education, linguistics, geography and communication, with usage of gold in Social Sciences more or less corresponding with full gold journal availability.
  • In Arts & Humanities, the most striking fact is the very large share of journals offering no open option at all. As in Social Sciences, usage of gold across Humanities fields more or less corresponds with full gold journal availability. Hybrid options are limited and even more rarely used, except in philosophy fields. Green sharing options are already limited, but their use is even lower.


Fig 2. OA classification of journals and publications in different subdisciplines (Web of Science, publication year 2017)

Increasing Plan S-compliant OA

Taking these data as a starting point (and taking into account that the proportion of Plan S compliant OA will be lower than the proportions of OA shown here, both for journals and publications), there are a number of ways in which both publishers and authors can increase Plan S-compliant OA (see Fig 3):

  • adapt journal policies to make existing journals compliant
    (re: license, copyright retention, transitional agreements, 0 embargo)
  • create new journals/platforms or flip existing journals to full OA (preferably diamond OA)
  • encourage authors to make use of existing OA options (by mandates, OA funding (including for diamond OA) and changes in evaluation system)

We also made a more detailed analysis of nine possible routes towards Plan S compliance (including potential effects on various stakeholders) that might be of interest here.


Fig 3. Ways to increase Plan S-compliant OA

Towards a gap analysis? Some considerations

In their implementation guidance, cOAlition S states that it will commission a gap analysis of Open Access journals/platforms to identify fields and disciplines where there is a need to increase their share. In doing so, we suggest it would be good not only to look at the share of currently existing gold OA journals/platforms, but to view this in the context of the potential to move towards Plan S compliance, both on the side of publishers and of authors. Filling any gaps could thus involve supporting new platforms, but also supporting the flipping of hybrid/closed journals and supporting authors in making use of these options, or at least considering the effect of the latter two developments on the expected gap size(s).

Another consideration in determining gaps is whether to look at the full landscape of (Plan S-compliant) full gold journals and platforms, or whether to make a selection based on relevance or acceptability to Plan S-funded authors, e.g. by impact factor, by inclusion in an ‘accepted journal list’ (e.g. the Nordic list(s) or the ERA list) or by other criteria. In our opinion, any such selection should be presented as an optional overlay/filter view, and preferably be based on criteria other than journal prestige, as this is exactly what cOAlition S wants to move away from in the assessment of research. Some more neutral criteria that could be considered are:

    • Language: English and/or at least one EU language accepted?
    • Content from cOAlition S or EU countries?
    • Readership/citations from cOAlition S or EU countries?
    • Editorial board (partly) from cOAlition S or EU countries?
    • Volume (e.g. papers per annum)

Of course we ourselves already made a selection by using WoS, and we fully recognize that this practical decision leads to limitations and bias in the results. For a further analysis of the inclusion of DOAJ journals in WoS per discipline, as well as the proportion of DOAJ journals in ESCI vs SCIE/SSCI/AHCI, see the accompanying blogpost ‘Gold OA journals in WoS and DOAJ’.

To further explore bias in coverage, there are also other journal lists that might be worthwhile to compare (e.g. ROAD, EZB, JournalTOCs, Scopus sources list). Another interesting initiative in this regard is the ISSN-GOLD-OA 2.0 list, which provides a matching list of ISSNs for Gold Open Access (OA) journals from DOAJ, ROAD, PubMed Central and the Open APC initiative. It is especially important to ensure that existing (and future) publishing platforms, diamond OA journals and overlay journals will be included in any analysis of gold OA publishing venues. One initiative in this area is the crowdsourced inventory of (sub)areas within mathematics where there is the most need for Fair Open Access journals.

There are multiple ways in which the rough analysis presented here could be taken further. First, a check on specific Plan S compliance criteria could be added, i.e. on CC-license type, copyright retention, embargo terms, and potentially on inclusion of hybrid journals in transformative agreements. Many of these (though not the latter) could be derived from existing data, e.g. in DOAJ and SherpaRomeo. Furthermore, an analysis such as this would ideally be based on fully open data. While not yet available in one interface that enables the required filtering, faceting and export functionality, a combination of the following sources would be interesting to explore:

  • Unpaywall database (article, journal, publisher and repository info, OA detection)
  • LENS.org (article, journal, affiliation and funder info, integration with Unpaywall)
  • DOAJ (characteristics of full gold OA journals)
  • SherpaRomeo (embargo information)

Ultimately, this could result in an open database that would allow multiple views on the landscape of OA publication venues and the usage thereof, enabling policy makers, service providers (including publishers) and authors alike to make evidence-based decisions in OA publishing. We would welcome an open (funding) call from cOAlition S funders to get people together to think and work on this.

 

Gates Foundation and AAAS – comments

Last week, Nature News reported on the termination of the open access agreement between the Gates Foundation and AAAS. Under this agreement, which lasted 18 months, the Gates Foundation paid a lump sum to have papers by their funded authors in AAAS journals (including Science) published open access, in compliance with the immediate open access mandate of the Gates Foundation.

In preparation of the Nature News article, the author, Richard van Noorden, asked us for our comments on these developments and was able to use a quote in the final article.

For transparency reasons, and because we feel this is an important issue, we share our full comments below.

What would be interesting to know is the reason the agreement was not renewed – whether it was due to an inability to reach an agreement over costs for this specific arrangement, or whether it signifies a wish by either AAAS or the Gates foundation to switch gears.

There are two main reasons why we are glad this agreement has been discontinued:

  • The Gates Foundation was very likely paying exorbitant APCs per paper, given the lump sum and the number of papers made OA, simply to be able to publish OA and gain the reputation these journals convey, while leaving everything else as it is, especially the reward system.
  • The agreement set a bad example by showing that everything can be had if you just throw enough money at it, while many researchers, institutions and countries are struggling to provide immediate open access at current prices.

It has probably done next to nothing among the glam journal publishers to raise awareness of the importance of switching to open access, and it hasn’t paved the way for researchers and institutions lacking the type of resources that the Gates Foundation has.

Recently, we have been seeing more activity by funders that indicates a desire to better control the costs associated with funding Open Access publications. Wellcome is reviewing its open access mandate (also in light of the large difference in APCs for hybrid and full gold OA journals), the European Union has been floating the idea of no longer funding hybrid open access publications (in the Impact Assessment (part 2, page 107) for Horizon Europe), and Robert-Jan Smits, Special Envoy Open Access of the EC, is exploring an agreement with European national funders to require grantees to publish in open access journals, control APC costs and stimulate the flipping of subscription and hybrid journals to full OA (as presented July 11 at ESOF; see the video recording and the accompanying press release by Science Europe).

Perhaps the most interesting aspect of these developments is whether they can be accompanied by a shift in evaluation criteria (by funders, governments and institutions) that will lessen the stranglehold of high-impact journals such as Nature, Science, NEJM and PNAS, which enables them to negotiate agreements such as the one described for AAAS and Gates at such high costs, or to resist making agreements over OA publishing altogether.

So it will be interesting to see whether Gates will continue to enforce its open access policy unchanged despite the termination of the agreement with AAAS/Science. If funders are steadfast in their determination to move OA forward and are able to contribute to a change in evaluation criteria, we might yet see a true watershed moment in scholarly publication.

There is one additional aspect regarding current initiatives by funders to exert influence over OA publishing and its accompanying costs, and that is of course the funder publishing platforms as implemented by (among others) Gates and Wellcome and as put forward for tender by the EC. These could be seen as a strategic option for funders to stimulate OA under conditions they can control, and indirectly also as a lever to promote more systemic change in OA publishing.

Jeroen Bosman and Bianca Kramer


Stringing beads: from tool combinations to workflows

[update 20170820: the interactive online table now includes the 7 most often mentioned ‘other’ tools for each question, next to the 7 preset choices. See also heatmap, values and calculations for this dataset]

With the data from our global survey of scholarly communication tool usage, we want to work towards identifying and characterizing full research workflows (from discovery to assessment).

Previously, we explained the methodology we used to assess which tool combinations occur together in research workflows more often than would be expected by chance. How can the results (heatmap, values and calculations) be used to identify real-life research workflows? Which tools really love each other, and what does that mean for the way researchers (can) work?

Comparing co-occurrences for different tools/platforms
First of all, it is interesting to compare the sets of tools that are specifically used together (or not used together) with different tools/platforms. To make this easier, we have constructed an interactive online table (http://tinyurl.com/toolcombinations, with a colour-blind safe version available at http://tinyurl.com/toolcombinations-cb) that allows anyone to select a specific tool and see those combinations. For instance, comparing tools specifically used by people publishing in journals from open access publishers vs. traditional publishers (Figures 1 and 2) reveals interesting patterns.

For example, while publishing in open access journals is correlated with the use of several repositories and preprint servers (institutional repositories, PubMedCentral and bioRxiv, specifically), publishing in traditional journals is not. The one exception here is sharing publications through ResearchGate, an activity that seems to be positively correlated with publishing regardless of venue.

Another interesting finding is that while both people who publish in open access and traditional journals specifically use the impact factor and Web of Science to measure impact (again, this may be correlated with the activity of publishing, regardless of venue), altmetrics tools/platforms are used specifically by people publishing in open access journals. There is even a negative correlation between the use of Altmetric and ImpactStory and publishing in traditional journals.

Such results can also be interesting for tool/platform providers, as it provides them with information on other tools/platforms their users employ. In addition to the data on tools specifically used together, providers could also use absolute numbers on tool usage to identify tools that are popular, but not specifically used with their own tool/platform (yet). This could identify opportunities to improve interoperability and integration of their own tool with other tools/platforms. All data are of course fully open and available for any party to analyze and use.


Figure 1. Tool combinations – Topical journal (Open Access publisher)


Figure 2. Tool combinations – Topical journal (traditional publisher)

Towards identifying workflows: clusters and cliques
The examples above show that, although we have only analyzed combinations of any two tools/platforms so far, these data already bring to light some interesting differences between research workflows. There are several possibilities to extend this analysis from separate tool combinations to groups of tools typifying full research workflows. Two of these possibilities are looking at clusters and cliques, respectively.

1. Clusters: tools occurring in similar workflows
Based on our co-occurrence data, we can look at which tools occur in similar workflows, i.e. have the most tools in common that they are or are not specifically used with. This can be done in R using a clustering analysis script provided by Bastian Greshake (see GitHub repo with code, source data and output). When run with our co-occurrence data, the script basically sorts the original heatmap with green and red cells by placing tools that have a similar pattern of correlation with other tools closer together (Figure 3). The tree structure on both sides of the diagram indicates the hierarchy of tools that are most similar in this respect.
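For readers who want a feel for the approach without digging into the repo, the general idea boils down to something like the following simplified R sketch (not the actual script); here 'cooc' is assumed to be a tools × tools matrix coded 1, 0 or -1 for positive, no and negative association, read from a hypothetical file.

cooc <- as.matrix(read.csv("cooccurrence_coded.csv", row.names = 1))
d  <- dist(cooc)   # distance between tools, based on their patterns of association with all other tools
hc <- hclust(d)    # hierarchical clustering of those patterns
heatmap(cooc, Rowv = as.dendrogram(hc), Colv = as.dendrogram(hc),
        scale = "none", col = c("red", "white", "green"))  # reordered heatmap, as in Figure 3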


Figure 3. Cluster analysis of tool usage across workflows (click on image for larger version). Blue squares A and B indicate clusters highlighted in Figure 4. A color-blind safe version of this figure can be found here.

Although the similarities (indicated by the length of the branches in the hierarchy tree, with shorter lengths signifying closer resemblance) are not that strong, some clusters can still be identified. For example, one cluster contains popular, mostly traditional tools (Figure 4A) and another cluster contains mostly innovative/experimental tools that apparently occur together in similar workflows (Figure 4B).


Figure 4. Two examples of clusters of tools (both clusters are highlighted in blue in Figure 3).

2. Cliques: tools that are linked together as a group
Another approach to defining workflows is to identify groups of tools that are all specifically used with *all* other tools in that group. In network theory, such groups are called ‘cliques’. Luckily, there is a good R library (igraph) for identifying cliques from co-occurrence data. Using this library (see GitHub repo with code, source data and output) we found that the largest cliques in our set of tools consist of 17 tools. We identified 8 of these cliques, which are partially overlapping. In total, there are over 3000 ‘maximal cliques’ (cliques that cannot be enlarged) in our dataset of 119 preset tools, varying in size from 3 to 17 tools. So there is lots to analyze!
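A minimal sketch of this step with igraph (the input file name is a placeholder; the actual code is in the GitHub repo):

library(igraph)
adj <- as.matrix(read.csv("cooccurrence_significant.csv", row.names = 1))  # 0/1 matrix: 1 = green cell in the heatmap
g   <- graph_from_adjacency_matrix(adj, mode = "undirected", diag = FALSE)
clique_num(g)                    # size of the largest cliques (17 in our data)
largest_cliques(g)               # the largest cliques themselves (8, partially overlapping)
length(max_cliques(g, min = 3))  # number of maximal cliques of 3 or more tools (over 3000)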

An example of one of the largest cliques is shown in Figure 5. This example shows a workflow with mostly modern and innovative tools, with an emphasis on open science (collaborative writing, sharing data, publishing open access, measuring broader impact with altmetrics tools), but surprisingly, these tools are apparently also all used together with the more traditional ResearcherID. A hypothetical explanation might be that this represents the workflow of a subset of people actively aware of and involved in scholarly communication, who started using ResearcherID when there was not much else, still have an account there, but now combine it with many other, more modern tools.


Figure 5. Example of a clique: tools that all specifically co-occur with each other

Clusters and cliques: not the same
It’s important to realize the difference between the two approaches described above. While the clustering algorithm considers similarity in patterns of co-occurrences between tools, the clique approach identifies closely linked groups of tools that can, however, each also co-occur with other tools in workflows.

In other words, tools/platform that are clustered together occur in similar workflows, but do not necessarily all specifically occur together (see the presence of white and red squares in Figure 4A,B). Conversely, tools that do all specifically occur together, and thus form a clique, can appear in different clusters, as each can have a different pattern of co-occurrences with other tools (compare Figures 3/5).

In addition, it is worth noting that these approaches to identifying workflows are based on statistical analysis of aggregated data – thus, clusters or cliques do not necessarily have an exact match with individual workflows of survey respondents. That is, we are not describing actual observed patterns, but inferring patterns based on observed strong correlations between pairs of tools/platforms.

Characterizing workflows further – next steps
Our current analyses of tool combinations and workflows are based on survey answers from all participants, for the 119 preset tools in our survey. We would like to extend these analyses to include the tools most often mentioned by participants as ‘others’. We also want to focus on differences and similarities between the workflows of specific subgroups (e.g. different disciplines, research roles and/or countries). The demographic variables in our public dataset (on Zenodo or Kaggle) allow for such breakdowns, but it would require coding an R script to generate the co-occurrence probabilities for different subgroups. And finally, we can add variables to the tools, for instance classifying which tools support open research practices and which don’t. This then allows us to investigate to what extent full Open Science workflows are not only theoretically possible, but already put into practice by researchers.

See also our short video, added below:

header image: Turquoise Beads, Circe Denyer, CC0, PublicDomainPictures.net

Academic social networks – the Swiss Army Knives of scholarly communication

On December 7, 2016, at the STM Innovations Seminar we gave a presentation (available from Figshare) on academic social networks. For this, we looked at the functionalities and usage of three of the major networks (ResearchGate, Mendeley and Academia.edu) and also offered some thoughts on the values and choices at play both in offering and in using such platforms.

Functionalities of academic social networks
Academic social networks support activities across the research cycle, from getting job suggestions and sharing and reading full-text papers to following the use of your research output within the system. We looked at the detailed functionalities offered by ResearchGate, Mendeley and Academia.edu (Appendix 1) and mapped these against seven phases of the research workflow (Figure 1).

In total, we identified 170 functionalities, of which 17 were shared by all three platforms. The largest overlap between ResearchGate and Academia lies in functionalities for discovery and publication (among others, sharing of papers), while for outreach and assessment, these two platforms have many functionalities that do not overlap. Some examples of unique functionalities include publication sessions (time-limited feedback sessions on one of your full-text papers) and making metrics public or private in Academia, and Q&As, ‘enhanced’ full-text views and downloads and the possibility to add additional resources to publications in ResearchGate. Mendeley is the only platform offering reference management and specific functionality for data storage according to FAIR principles. A detailed list of all functionalities identified can be found in Appendix 1.


Figure 1. Overlap of functionalities of ResearchGate, Mendeley and Academia in seven phases of the research cycle

Within the seven phases of the research cycle depicted above, we identified 31 core research activities. If the functionalities of ResearchGate, Mendeley and Academia are mapped against these 31 activities (Figure 2), it becomes apparent that Mendeley offers the most complete support for discovery, while ResearchGate supports archiving/sharing of the widest spectrum of research output. All three platforms support outreach and assessment activities, including impact metrics.


Figure 2. Mapping of functionalities of ResearchGate, Mendeley and Academia against 31 activities across the research workflow

What’s missing?
Despite offering 170 distinct functionalities between them, there are still important functionalities that are missing from the three major academic social networks. For a large part, these center around integration with other platforms and services:

  • Connect to ORCID  (only in Mendeley), import from ORCID
  • Show third party altmetrics
  • Export your publication list (only in Mendeley)
  • Automatically show and use clickable DOIs (only in Mendeley)
  • Automatically link to research output/object versions at initial publication platforms (only in Mendeley)

In addition, some research activities are underserved by the three major platforms. Most notably among these are activities in the analysis phase, where functionality to share notebooks and protocols might be a useful addition, as would text mining of full-text publications on the platform. And while Mendeley offers extensive reference management options, support for collaborative writing is currently not available on any of the three platforms.

If you build it, will they come?
Providers of academic social networks clearly aim to offer researchers a broad range of functionalities to support their research workflow. But which of these functionalities are used by which researchers? For that, we looked at the data of 15K researchers from our recent survey on scholarly communication tool usage. Firstly, looking at the question on which researcher profiles people use (Figure 3), it is apparent that of the preselected options, ResearchGate is the most popular. This is despite the fact that Academia.edu reports a much higher overall number of accounts (46M compared to 11M for ResearchGate). One possible explanation for this discrepancy could be a high number of lapsed or passive accounts on Academia.edu – possibly set up by students.


Figure 3. Survey question and responses (researchers only) on use of researcher profiles. For interactive version see http://dashboard101innovations.silk.co/page/Profiles

Looking a bit more closely at the use of ResearchGate and Academia in different disciplines (Figure 4), ResearchGate proves to be dominant in the ‘hard’ sciences, while Academia is more popular in Arts & Humanities and, to a lesser extent, in Social Sciences and Economics. Whether this is due to the specific functionalities the platforms offer, the effect of what one’s peers are using, or even the names of the platforms (with researchers from certain disciplines identifying more with the term ‘Research’ than ‘Academia’ or vice versa) is up for debate.


Figure 4. Percentage of researchers in a given discipline that indicate using ResearchGate and/or Academia (survey data)

If they come, what do they do?
Our survey results also give some indication as to what researchers are using academic social networks for. We had ResearchGate and Mendeley as preset answer options in a number of questions about different research activities, allowing a quantitative comparison of the use of these platforms for these specific activities (Figure 5). These results show that of these activities, ResearchGate is most often used as a researcher profile, followed by its use for getting access to publications and sharing publications, respectively. Mendeley was included as a preset answer option for different activities; of these, it is most often used for reference management, followed by reading/viewing/annotating and searching for literature/data. The results also show that for each activity it was presented as a preset option for, ResearchGate is used most often by postdocs, while Mendeley is predominantly used by PhD students. Please note that these results do not allow a direct comparison between ResearchGate and Mendeley, except for the fourth activity in both charts: getting alerts/recommendations.


Figure 5. Percentage of researchers using ResearchGate / Mendeley for selected research activities (survey data)

In addition to choosing tools/platforms presented as preset options, survey respondents could also indicate any other tools they use for a specific activity. This allows us to check for which other activities people use any of the academic social networks, and to plot these against the activities these platforms offer functionalities for. The results are shown in Figure 6 and indicate that, in addition to activities supported by the respective platforms, people also carry out activities on social networks for which there are no dedicated functionalities. Some examples are using Academia and ResearchGate for reference management, and sharing all kinds of research outputs, including formats not specifically supported by the respective networks. Some people even indicate using Mendeley for analysis – we would love to find out what type of research they are carrying out!

For much more and alternative data on the use of these platforms’ functionalities, please read the analyses by Ortega (2016), based on scraping millions of pages in these systems.


Figure 6. Research activities people report using ResearchGate, Mendeley and/or Academia for (survey data)

Good, open or efficient? Choices for platform builders and researchers
Academic social networks are built for and used by many researchers for many different activities. But what kind of scholarly communication do they support? At Force11, the Scholarly Communications Working Group (of which we are both steering committee members) has been working on formulating principles for scholarly communication that encourage open, equitable, sustainable, and research- and culture-led (as opposed to technology- and business-model-led) scholarship.

This requires, among other things, that research objects and all information about them can be freely shared among different platforms, and not be locked into any one platform. While Mendeley has an API it claims is fully open, both ResearchGate and Academia are essentially closed systems. For example, all metrics remain inside the system (though Academia offers an export to CSV that we could not get working) and by uploading full text to ResearchGate you grant them the right to change your PDFs (e.g. by adding links to cited articles that are also in ResearchGate).

There are platforms that operate from a different perspective, allowing a more open flow of research objects. Some examples are the Open Science Framework, F1000 (with F1000 Workspace), ScienceOpen, Humanities Commons and GitHub (with some geared more towards specific disciplines). Not all of these platforms support the same activities as ResearchGate and Academia (Figure 7), and there are marked differences in the level of support for activities: sharing a bit of code through ResearchGate is almost incomparable to the full range of options for this on GitHub. All these platforms offer alternatives for researchers wanting to conduct and share their research in a truly open manner.


Figure 7. Alternative platforms that support research in multiple phases of the research cycle

Reading list
Some additional readings on academic social networks and their use:

Appendix 1
List of functionalities within ResearchGate, Mendeley and Academia (per 20161204). A live, updated version of this table can be found here: http://tinyurl.com/ACMERGfunctions.


Appendix 1. Detailed functionalities of ResearchGate, Mendeley and Academia per 20161204. Live, updated version at http://tinyurl.com/ACMERGfunctions

Tools that love to be together

[updates in brackets below]
[see also follow-up post: Stringing beads: from tool combinations to workflows]

Our survey data analyses so far have focused on tool usage for specific research activities (e.g. GitHub and others: data sharing, Who is using altmetrics tools, The number games). As a next step, we want to explore which tool combinations occur together in research workflows more often than would be expected by chance. This will also facilitate identification of full research workflows, and subsequent empirical testing of our hypothetical workflows against reality.

Checking which tools occur together more often than expected by chance is not as simple as looking at which tools are most often mentioned together. For example, even if two tools are not used by many people, they might still occur together in people’s workflows more often than expected based on their relatively low overall usage. Conversely, take two tools that are each used by many people: stochastically, a sizable proportion of those people will be shown to use both of them, but this might still be due to chance alone.

Thus, to determine whether the number of people that use two tools together is significantly higher than can be expected by chance, we have to look at the expected co-use of these tools given the number of people that use either of them. This can be compared to the classic example in statistics of taking colored balls out of an urn without replacement: if an urn contains 100 balls (= the population) of which 60 are red (= people in that population who use tool A), and from these 100 balls a sample of 10 balls is taken (= people in the population who use tool B), how many of these 10 balls would be red (=people who use both tool A and B)? This will vary with each try, of course, but when you repeat the experiment many times, the most frequently occurring number of red balls in the sample will be 6. The stochastic distribution in this situation is the hypergeometric distribution.


Figure 1. Source: Memrise

For any possible number x of red balls in the sample (i.e. 0-10), the probability of result x occurring at any given try can be calculated with the hypergeometric probability function. The cumulative hypergeometric probability function gives the probability that the number of red balls in the sample is x or higher. This probability is the p-value of the hypergeometric test (identical to the one-tailed Fisher test), and can be used to assess whether an observed result (e.g. 9 red balls in the sample) is significantly higher than expected by chance. In a single experiment as described above, a p-value of less than 0.05 is commonly considered significant.

In our example, the probability of getting at least 9 red balls in the sample is 0.039 (Figure 2). Going back to our survey data, this translates to the probability that in a population of 100 people, of which 60 people use tool A and 10 people use tool B, 9 or more people use both tools.
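This number can also be checked in R with the cumulative hypergeometric distribution:

# P(at least 9 red balls when drawing 10 from an urn with 60 red and 40 other balls)
1 - phyper(8, 60, 40, 10)   # same as phyper(8, 60, 40, 10, lower.tail = FALSE), ~0.039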


Figure 2 Example of hypergeometric probability calculated using GeneProf.

In applying the hypergeometric test to our survey data, some additional considerations come into play.

Population size
First, for each combination of two tools, what should be taken as total population size (i.e. the 100 balls/100 people in the example above)? It might seem intuitive that that population is the total number of respondents (20,663 for the survey as a whole). However, it is actually better to use only the number of respondents who answered the survey questions where tools A and B occurred as answers.

People who didn’t answer both questions cannot possibly have indicated using both tools A and B. In addition, the probability that at least x people are found to use tools A and B together is lower in a large total population than in a small population. This means that the larger the population, the smaller the number of respondents using both tools needs to be for that number to be considered significant. Thus, excluding people that did not answer both questions (and thereby looking at a smaller population) sets the bar higher for two tools to be considered preferentially used together.

Choosing the p-value threshold
The other consideration in applying the hypergeometric test to our survey data is what p-value to use as a cut-off point for significance. As said above, in a single experiment, a result with a p-value lower than 0.05 is commonly considered significant. However, with multiple comparisons (in this case: when a large number of tool combinations is tested in the same dataset), keeping the same p-value will result in an increased number of false-positive results (in this case: tools incorrectly identified as preferentially used together).

The reason is that a p-value of 0.05 indicates there is a 5% chance the observed result is due to chance. With many observations, there will inevitably be more results that may seem positive, but are in reality due to chance.

One possible solution to this problem is to divide the p-value threshold by the number of tests carried out simultaneously. This is called the Bonferroni correction. In our case, where we looked at 119 tools (7 preset answer options for each of 17 survey questions) and thus at 7,021 unique tool combinations, this results in a p-value threshold of 0.0000071.

Finally, when we not only want to look at tools used more often together than expected by chance, but also at tools used less often together than expected, we are performing a 2-tailed, rather than a 1-tailed test. This means we need to halve the p-value used to determine significance, resulting in a p-value threshold of 0.0000036.
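The same thresholds, computed in R:

n_pairs <- choose(119, 2)  # 7,021 unique tool combinations
0.05 / n_pairs             # Bonferroni-corrected threshold, ~0.0000071
0.05 / n_pairs / 2         # halved for the 2-tailed test, ~0.0000036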

Ready, set, …
Having made the decisions above, we are now ready to apply the hypergeometric test to our survey data. For this, we need to know for each tool combination (e.g. tool A and B, mentioned as answer options in survey questions X and Y, respectively):

a) the number of people that indicate using tool A
b) the number of people that indicate using tool B
c) the number of people that indicate using both tool A and B
d) the number of people that answered both survey questions X and Y (i.e. indicated using at least one tool (including ‘others’) for activity X and one for activity Y).

These numbers were extracted from the cleaned survey data either by filtering in Excel (a, b (12 MB), d (7 MB)) or through an R script (c, written by Roel Hogervorst during the Mozilla Science Sprint).

The cumulative probability function was calculated in Excel (values and calculations) using the following formulas:

=1-HYPGEOM.DIST((c-1),a,b,d,TRUE)
(to check for tool combinations used together more often than expected by chance)

and
=HYPGEOM.DIST(c,a,b,d,TRUE)
(to check for tool combinations used together less often than expected by chance)
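For those who prefer R over Excel, the same two cumulative probabilities can be obtained with phyper(); this is our translation of the formulas above, not the script used for the published calculations:

# a = users of tool A, b = users of tool B, c_both = users of both,
# d = respondents who answered both survey questions X and Y
p_more <- function(a, b, c_both, d) phyper(c_both - 1, b, d - b, a, lower.tail = FALSE)
p_less <- function(a, b, c_both, d) phyper(c_both, b, d - b, a)
p_more(60, 10, 9, 100)  # the urn example from above, ~0.039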


Figure 3 – Twitter

Bonferroni correction was applied to the resulting p-values as described above and conditional formatting was used to color the cells. All cells with a p-value less than 0.0000036 were colored green or red, for tools used more or less often together than expected by chance, respectively.

The results were combined into a heatmap with green, red and non-colored cells (Fig 4), which can also be found as the first tab in the Excel files (values & calculations).

[Update 20170820: we now also have made the extended heatmap for all preset answer options and the 7 most often mentioned ‘others’ per survey question (Excel files: values & calculations)]


Figure 4 Heatmap of tool combinations used together more (green) or less (red) often than expected by chance (click on the image for a larger, zoomable version).

Pretty colors! Now what?
While this post focused on methodological aspects of identifying relevant tool combinations, in future posts we will show how the results can be used to identify real-life research workflows. Which tools really love each other, and what does that mean for the way researchers (can) work?

Many thanks to Bastian Greshake for his helpful advice and reading of a draft version of this blogpost. All errors in assumptions and execution of the statistics remain ours, of course 😉