A different take on the annual retrospective. This time, a few days ahead of our Christmas drinks, I asked our Library staff to name a few remarkable events (“things that you are proud of”) that happened in 2015. My last 2015 blog is dedicated to the events mentioned, which all seem to be concentrated in the last months of the year. I promised I would put the contributions in this weblog and share the pictures. My apologies for not crediting every contributor individually.
Library tour app
…we did it!
…our team was so diverse, it was fun, and everybody was committed and did just what was needed
…we involved the end-user right from the start
…we finally did something with mobile
…we received so many positive responses
…there is still a lot of potential for students to use the app
…projects like these give our R&D team so much energy
On December 15 this year our Board approved the plans outlined in our report “activating academic heritage Delft University of Technology”, with which the Library can continue maintaining this heritage and making it accessible, but may also exploit it to strengthen the TU’s collective identity and standing. We will be able to attract dedicated staff for these activities. In 2016 the heritage collection will be transferred to our book depot – a lot of our staff worked extremely hard and efficiently to make this possible.
We had a positive end to the year: our Open Science programme made good progress, and we received budget to set up proper research data management with our faculties, to promote our how-to guide, and to stimulate open access via our open access fund. We are also happy that we can continue to provide access to our Elsevier journals, while at the same time making real steps towards open access with this publisher.
A big achievement this year was our transition from Aleph to WorldShare Management Services, and from our own Discover environment to WorldCat Discovery. We call the new library management system Nubes. All people involved deserve a big compliment: we managed to deliver on time and within budget, and in the coming weeks we hope to tie up some remaining loose ends.
We registered more this year than ever before in METIS, our current research information system (to be replaced by Pure in 2016), and uploaded more than ever to our TU Delft repository. Great!
In October we moved from SharePoint 2010 to 2013; in total 200 team sites were migrated. This went really smoothly, thanks among other things to good preparation and great cooperation between our own functional application managers and the product managers from ICT.
Office in your bag
For the past two weeks you could find a Christmas tree in our “Praethuys”. Everybody could put a wish in the tree for the office app they think they might need. After all, your smartphone can do everything for you: communicate, find stuff, make a shopping list or check your account. But what app could help you and make you more productive at work?
And in December we showed in our Library Learning Centre how flexible we are by hosting a PhD defence, after a (fortunately small) incident in the Auditorium where PhD defences normally take place.
So within a few days we created a quick and certainly not complete overview of our 2015. And now we are about to enter the millennium’s sweet-sixteen birthday party. We are starting to become grown-ups in this millennium: fewer things will surprise us, fewer things can simply be denied, and we may realize that the world is amazing, and that we can only make it better if we really try.
I spent a few days in Paris, France, with my research data colleagues: almost 600 participants from 38 countries gathered for the 6th RDA (Research Data Alliance) Plenary. This plenary focused on the need to work with enterprises, and had climate change as its underlying theme.
That was the reason that Barbara Ryan (Secretariat Director, Group on Earth Observations) held a keynote on the first day. She did not focus on climate change per se, but explained how she managed to get their data open, and the effect this has had on usage. “Countries have borders, earth observations have not.”
We were all impressed with the statement that Axelle Lemaire (Minister of State for Digital Technology, French Ministry of Economy, Industry and Digital Technology) made at the start of the conference. When talking about data, she preferred the metaphor of light to that of oil. Data is not a fossil resource that might run dry; data is around in many forms, sometimes a bit diffuse, but crucial, and it needs to be shared to create value. She told us that France would launch a public consultation on “the Digital Bill” on 26 September. A delightful presentation.
On the Plenary day I attended, 23 September (there were three days in total), I was especially curious to see how working groups I had attended before had progressed. So I attended the Publishing Data Workflows and Data Citation groups. The first group gave us a link to their article, and sample cases where Dataverse, Dryad or figshare is used in the publisher’s data workflow. Future work will concentrate on moving forward in the research process and analysing how processes for data publishing might work there. The working group invites everybody to contribute their best practices, thoughts and comments.
I think that we as libraries should realize that this is indeed what publishers are doing now (note also the press release announced at the RDA meeting about Mendeley Data and DANS). If we support our researchers with their data management plans and data stewardship, we can advise them on how to keep, store and share their data without giving the content away. I found the remark William Gunn from Mendeley made at the workshop a day earlier reassuring: “All types of content providers need to focus on value-added services and not paywalls”.
The Data Citation Working Group will shortly report on their 14 recommendations. The idea of RDA was that working groups work on a certain topic for only 18 months, after which the group dissolves and new groups emerge. The difficulty here is that people like to continue their work, either because they feel committed to their legacy, or because there are many more ideas or recommendations to explore. New for me in this session was the “query store” as a middleman: you need to be able to reproduce your queries, so you give them a persistent identifier, but you also need to be able to retrieve the same data with that query, so you version your data with a timestamp. I also learned that data can be watermarked or carry fingerprints as a protection layer (this related to data from social insurance providers for doctors and hospitals). Another term often used was “snapshot”: a version is a snapshot of your database. And I think it was Stefan Pröll (or perhaps somebody who asked him a question) who said: “If users do not cite your data, cite your users”.
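The “query store” idea can be sketched in a few lines. This is a toy illustration only, not how any RDA system is actually implemented: the PID scheme, the class and the queries are all invented for the example.

```python
import hashlib
from datetime import datetime, timezone

class QueryStore:
    """Toy query store: each registered query receives a persistent
    identifier, plus the timestamp of its first registration, so the
    same data subset can later be re-retrieved from a timestamped
    (versioned) database."""

    def __init__(self):
        self._queries = {}

    def register(self, sql):
        # Normalise whitespace and case so trivially different
        # spellings of the same query share one identifier.
        normalised = " ".join(sql.split()).lower()
        pid = "query/" + hashlib.sha256(normalised.encode()).hexdigest()[:12]
        if pid not in self._queries:
            self._queries[pid] = {
                "sql": sql,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            }
        return pid

    def resolve(self, pid):
        # The stored timestamp tells you against which snapshot of
        # the data the query must be re-executed.
        return self._queries[pid]

store = QueryStore()
pid = store.register("SELECT name FROM samples WHERE year = 2014")
same = store.register("select name  FROM samples where year = 2014")
print(pid == same)  # the two spellings share one identifier
```

The timestamp plus the database’s own versioning is what makes the citation reproducible: re-running the stored query against the snapshot valid at that timestamp should return exactly the cited subset.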
I already referred to the workshop on the day preceding the RDA, on e-infrastructures & RDA for data-intensive science. There was some overlap between these two days, which I did not mind at all. A very nice tool, called RD-Switchboard, was presented by Amir Aryani from ANDS (Australia). This switchboard connects datasets on the basis of co-authorship or other collaboration (e.g. via funding). Paolo Manghi showed that they already work together with RD-Switchboard, finding connections via the OpenAIRE database between publications and projects and between publications and data.
Mark Parsons, the secretary general of RDA, talked (amongst other entertaining things) about infrastructures during the opening session of the preceding day: how we went from systems, to networks, to networked infrastructures. Infrastructures are about bridges, both social and technical, and that is what RDA wants to do: create bridges, and be open! “Preserve the freedom to tinker, that is why the choice for open source is important.”
My Paris RDA trip started even a day earlier, with the persistent identifiers workshop organized by DataCite and ePIC. ePIC stands for persistent identifiers for eResearch and works on data throughout the full research cycle (what they call referable data), whereas DataCite provides identifiers for citeable data. At the workshop there were presentations about identifiers such as ARK, DOI, Handle, ORCID and ISNI. Domain-specific identifiers are often needed as well: Anne Cambon-Thomsen started a journal for descriptions of bioresources, and Kerstin Lehnert introduced the IGSN, the geosample number.
And we are not there yet: we want to use identifiers for more physical objects, we should always make sure that we refer to the PID in the metadata, and according to Peter Wittenburg we should also use identifiers for the metadata itself. Obviously, the most important thing is that these persistent identifiers are linked across platforms, and that we have an open scholarly infrastructure. A project about this has just started: “Technical and Human Infrastructure for Open Research” (THOR). Tobias Weigel even wanted to take it further: “We need an operational transition process. Go from one PID to the other. That is not possible yet.”
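To make the linking point concrete, here is what a metadata record with embedded identifiers might look like. The field names loosely follow the DataCite metadata schema and every value is invented; this is a sketch of the principle, not a validated record.

```python
# A dataset's metadata record carrying its own PID, a separate PID
# for the metadata record itself (the point about identifiers for
# the metadata), an author identifier, and cross-platform links via
# related identifiers. All identifiers below are fabricated examples.
record = {
    "identifier": {"identifierType": "DOI", "identifier": "10.1234/example-dataset"},
    "metadataIdentifier": {"identifierType": "Handle", "identifier": "21.T9999/md-0001"},
    "creators": [
        {
            "creatorName": "Example, A.",
            "nameIdentifier": {"scheme": "ORCID", "identifier": "0000-0002-1825-0097"},
        }
    ],
    "relatedIdentifiers": [
        {
            "relationType": "IsSupplementTo",
            "relatedIdentifierType": "DOI",
            "relatedIdentifier": "10.1234/example-article",
        }
    ],
}

# A quick consistency check: every identifier field is non-empty.
assert record["identifier"]["identifier"]
assert all(r["relatedIdentifier"] for r in record["relatedIdentifiers"])
```

It is exactly these `relatedIdentifiers`-style links, resolved across registries, that make an open scholarly infrastructure navigable by machines.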
New for me in the presentation by Laura Paglione from ORCID was that they will add review information to your ORCID profile, and she showed that there is already a data flow between CrossRef, DataCite and ORCID.
So even though I could not attend the full Plenary, I took away plenty of inspiration!
I attended the first two days of the IATUL conference 2015, on strategic partnerships for access and discovery, on July 6 and 7, 2015. It had been a while since I last visited an IATUL conference (the last time was in Auckland, New Zealand, in 2008), so it was nice to be invited by TIB Hannover to give a keynote and to be able to attend the meeting in Hannover, Germany.
IATUL celebrated its 60th anniversary, and last year the General Assembly decided that members from university libraries other than technical ones are now also welcome as IATUL members. Of course, at this conference there were still mainly representatives from technical libraries (which I like, perhaps because it can give the meeting a more focused flavour, though in the end we all face the same challenges). Let me mention a few observations I made during these first two days:
- The obvious topic of open access was immediately addressed on the first day by Martin Hofmann-Apitius, who made a very explicit case for why publishers should allow automated text mining, which could save the lives of cancer patients. In the panel afterwards he said that he thought scientific youth has been spoiled by Google: “We should come back from the hype, and define what solid-based literature search means”.
José Cotta (from CONNECT, the Directorate-General of the European Commission), the second keynote speaker, framed open science as a “democratisation” of science, and explained that the free flow of data is necessary for a digital single market. He divided open science into: e-infrastructures for open science; open access to research results & processes; evidence-based policy making (global systems science); and public engagement (citizen science, crowdsourcing). Cotta further referred to the blog post Moedas and Oettinger jointly wrote, and ended with the statement that we need to “catalyse a change in culture”. (I had already heard the update Cotta gave on the pilot for research data.)
- Look out for the English translation of a public paper from ETH Zurich about the strategy for their collections (a summary is already available). They identified four functions of scientific collections: research; teaching; transfer of knowledge to the public; and preservation of cultural heritage.
- Frank Seeliger (TH Wildau Library, Germany) showed some nice visual presentations in the (digital or physical) library environment, or as he puts it: “making knowledge and science tangible”. They invite their professors to the library with this tagline: “Come to us to play”. Twice during these two IATUL days reference was made to the fluid library, i.e. the library can be wherever you want it. The tangible example at Wildau: every table in the library has an RFID reader, so when a book is put down there it can stay there and remain available (because it is localised).
- Simone Fühles-Ubach (Cologne University of Applied Sciences, Germany) explained a model for library strategy that focuses on what users want to do rather than on what they want. It is called the OpenStrategies PRUB model, because libraries run Projects to produce Results which customers and citizens Use to create Benefits.
- Wolf-Tilo Balke made it clear that libraries can (or should) connect information science and computer science, and combine this with their knowledge of the subject domain. He believes this is necessary because otherwise people get lost in the information overload, and “taking something out of the collection is the same as putting something on the 2nd page of Google”.
- I have made notes to take a better look at WorldWideScience.org, which presents itself as “a global science gateway comprised of national and international scientific databases and portals”. I must also investigate the impressive overview of research support services that Hester Mountifield (University of Auckland, New Zealand) provided in her talk called “Through power of collaboration. How we increased our impact by helping researchers to increase theirs”. And I should have a better look at the circle of (physical) library functions (slide 20 in the PDF linked on this page) that Brian Irwin showed in his joint talk with Sharon L. Bostick.
So, enough to get back to once I am back in Delft. I liked the meeting and the talks I had with the attendees. The organisation was very “light”, which made it pleasantly informal, and there was a lot of diversity in the topics (though perhaps some more focus in the overall programme would have made it even better).
I have already given some attention to my own keynote via SlideShare and Twitter, but I would like to repeat very briefly what I feel is important for us librarians. We know why we find open access important: we want everybody to have easy access to research findings. We should work together to reach this goal, and be Flexible, Assertive, Innovative and Realistic while doing so. If you want to know more, just reach out (and I’ll be there ;-).
While we are implementing (hopefully more and more) open access deals in The Netherlands (e.g. with Springer), one thought crossed my mind. For quite some time funding agencies have supported the open access publication of scientific articles. This is of course what we want, because we want knowledge to flow and our economy to grow. However… is all this extra funding for open access publications a sensible way forward? I am asking because of the upcoming offset deals. In the old days we (institutes, libraries) put money into the publishing system for access, for reading (and of course for the additional services publishers offered alongside in the digital age). We have created a movement to (try to) change to a system where we (will) pay for the publication production part, while reading or access is “free”. There is a lot to read about the total cost of publishing, and how open access may cost us much more than we pay for “closed” access right now.
And where does, or will, the extra money we need come from? From funding agencies, research grants, other faculty budgets, or perhaps the government (the latter is not the case in The Netherlands). We are creating complexity in the world of publishing, finding ways to fund open access, where it could perhaps be as simple as it used to be. Apart from member or personal subscriptions, the major share of the cost of reading access came from institutional (library) budgets. There was no budget for this from funding agencies; if you wanted extra content not covered by the “big deals”, you arranged this with your librarian, and publishers marketed their content via the libraries and via the researchers (“ask your library to …”). So why are we changing this? If a library or consortium wants an open access arrangement for or alongside the subscribed content, you cannot “use” this extra money in the system. To refer again to the Springer deal: the money paid is (we accept that in this transition period) mainly for opening up our articles in hybrid journals, and it is not possible to fund this (which I do understand). However, we all know this extra money exists, and we cannot ignore it during our negotiations. And this creates complexity.
Let there just be one budget for publishing (and for the sake of simplicity I leave out the surcharges that might be paid by the authors out of their own research budget, also in the old days), and have no, I say no, extra funding for open access. Too bad. Not because I am not in favour of Open Access. But because I am.
A few afterthoughts (and of course, do not consider this blog post a well-thought-out and reviewed article; I am just writing it on the train):
- This total publishing budget should be the university’s. The library can hold it, but we need to make sure that authors understand what costs are involved in publishing.
- We need all stakeholders to make the transition happen; as I stated earlier, there is a real choice to be made to make progress in open access.
- We need to take care that open access does not add a barrier for people to publish (researchers who are not affiliated with research institutes that can afford this – though we had the same problem with subscriptions, of course).
- In the transition phase funders can stimulate open access, but I think that funding APCs is not the way to do this.
- The complexity is perhaps created more by traditional publishers that do not want to change their models due to an uncertain future. We know there are up-and-coming successful open access publishers with changed business models.
I attended 75% of APE2015, i.e. I missed the second afternoon; the sessions I did attend of course got my full attention. My report touches upon some highlights; for full coverage, please check the recordings and slides that will be put online later.
Open access as a means, not a goal
Quite a few presentations of course referred to open access, starting with Martin Grötschel, incoming President of the Berlin-Brandenburg Academy of Sciences, who stated that “everything should be available at your fingertips” and that “openness is the best possible way to foster high quality science”. And ending (well, at least for me) with Jan Velterop, independent advocate and advisor on open access and open science: “The goal is optimal dissemination of knowledge”. He referred to the term “lamp post research”: with the publication overload, researchers might only be looking where the light is (but not necessarily where the key is). I come back to that later. Celina Ramjoué, Head of Sector, Open Access to Scientific Publications and Data, European Commission, talked about open science. She also emphasized the importance of the free circulation of knowledge, and explained the actions of her department along four lines: 1. e-infrastructures (big data); 2. evidence-based policy making; 3. open access to research results and processes; and 4. citizen engagement.
The dotcoms-to-watch session chaired by Eefke Smit carried the title “sharing is multiplying”, which nicely captures the goal of open access and resembles our library’s vision that “if you share, you grow”. For the circulation of samples, Olivier Acher (www.sampleofscience.net) wants to connect scientists who create samples with scientists who can use them. Descriptions are published in the journal Sample of Science, and each disseminated sample becomes a citable item. I did remember readcube.com, but I did not realize that they were the launching partner of the shareable, read-only articles that Nature Publishing Group has on its website, which can be used for peer-to-peer sharing and media referral, as Nicko Goncharov (Digital Science) explained. David Sommer (Kudos) was back from last year: since their launch in April 2014 they have 29,000 author registrations. Authors can explain, enrich, share and measure what happens with their articles. New are an institutional partnership and evidence that enriched articles are read and cited more.
From knowledge maps to knowledge vault
Hans Uszkoreit, Professor of Computational Linguistics at Saarland University in Saarbrücken, referred to the highest level of offering information resources: with newly structured knowledge. Uszkoreit, Jan Velterop and David Wade all referred to knowledge maps or graphs: how these already underpin current Google and Bing services, and how they can be improved or sharpened by adding other, “more closed” content. As Velterop put it: “Make sure that we at least get access to meaningful stuff in the articles. The more we have, the sharper the knowledge picture.” Wade is Director Scholarly Communication at Microsoft Research. He had some advice for the (mainly publisher) audience: “get crawled and indexed, get a sitemaps.xml, and mark up your content”. Interestingly, he showed us that Microsoft uses these knowledge maps “behind the screen”: a new feature of Word is a direct reference to Word Online (a sort of extended autosuggest), and you can also search for online pictures (with CC-BY licenses) in PowerPoint or search for data via an online search in Excel.
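Wade’s first piece of advice, “get a sitemaps.xml”, is cheap to follow. Here is a minimal sketch of generating one; the URLs are invented examples, and real sitemaps support more fields (such as lastmod and changefreq) than shown here.

```python
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    # The sitemaps.org namespace is what lets crawlers recognise
    # the file; each URL becomes a <url><loc>...</loc></url> entry.
    root = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = ET.SubElement(root, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(root, encoding="unicode")

print(build_sitemap([
    "https://example.org/articles/1",
    "https://example.org/articles/2",
]))
```

Publishers would generate such a file from their article landing pages and reference it in robots.txt, so search engines can discover every item without crawling the whole site.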
Making it happen, reaching our goal
Obviously, if you want something, you should try to get there, and there may be more ways to reach your goal than you can think of yourself. In The Netherlands we have just reached an agreement with Springer. At APE, Veronika Spinka (Open Access Manager) showed us how Springer is automating the process of identifying and verifying the author/institution for the APC deposits. Richard Wynne, Vice President of Sales and Marketing, Aries Systems Corporation, explained that we really should keep metadata and business rules separate from each other, as was the case with the subscription model. Striking, I thought, was the author landing page Jake Kelleher (Senior Director of Licensing and Business Development, Copyright Clearance Center) showed us, where an author (or institute) can see the agreed fee to be paid, consisting of an APC and all sorts of surcharges (for supplementary data, colour and a CC-BY license).
Ramjoué mentioned a possibility the EU has now created whereby APCs can still be covered up to two years after a project has ended (in relation to FP7).
Frederick Dylla (see also last year) updated us on CHORUS: they now have more than 100 signatories, and the number is growing. CHORUS builds on existing infrastructure and consists of a landing page (a popular term, I must say!) for public access on the publisher’s site; progress can be followed via a live dashboard. In the US they chose the green road, and CHORUS is the publishers’ solution. On the library side there is SHARE (the shared access research ecosystem), which works together with CHORUS, e.g. on identifiers.
(I need to) Come back to this later, or remember these phrases!
- Research pad (convert all open content to ePub format)
- Utopiadocs.com (I already heard about it, but should check it again, about “resurrecting knowledge”)
- Corona, your personal digital research assistant (Microsoft)
- Uszkoreit: “Owners of the texts do not own the facts”
- Dirk Pieper, referring to an open access clearing centre to pay APCs, and a landing page for their authors/publications
- Phil Archer (W3C): “a book is a broken, dead thing for youngsters”; “Semantics matter, otherwise machines cannot read it efficiently; if models are different, again you make it difficult for machines”
- Kent Anderson on what peer review is about: “Is this new, done well, important? First, best or last?”, and, provoking the publishers: “Investing in peer review is investing in your core function”
- Velterop: “You don’t get answers, but hints”
- Follow what happens in the CC-BY discussion: is it true that authors need to pay more under this license? Should we add ND to it?
I have been writing about the idea of a living campus for some time now. And now it is that time of year again: to sit back, clean up your laptop, and read the blogs, posts and notes you have kept aside over the past year.
For a year and a half now, the TU Delft Library has been enriched by the inspiring programmes of Studium Generale, which has been organizing events at TU Delft since 1946 (SG is now part of the Library organisation). Their main target group is students, but anyone else interested is always welcome. In 2014 I attended a few of their events.
The variety is huge. I listened to Desanne van Brederode talking about friendship at one of the monthly “broodjes filosofie” sessions: when do you call somebody a friend, why does a friendship last forever (or not), and when or how are friendships created? Earlier this year I went to a thematic movie night at “filmhuis Lumen”, put together by Rolf Hut, where he invited Nick van de Giesen to talk about and watch documentaries on the theme of food production. Nick van de Giesen had a wonderful line that night about knowledge, and when I asked him later by email to repeat it, this is roughly what we remembered (translated from Dutch): “Contrary to many other means of production, knowledge-intensive production is not expensive; it is cumulative and can be transported freely throughout the world”.
Only recently I attended one of the Sunday morning Van Leeuwenhoek sessions at the TU Delft Science Centre, this time given by Andy van den Dobbelsteen. Of course Andy is now famous for the solar decathlon team with the Prêt-à-Loger home, but in this lecture he showed what he changed in his own home to make it more energy-friendly, though even for him the maximum allowed 10 MWph is still a serious challenge!
And there is so much more happening in, at or around the TU Delft campus. I am looking forward to another year full of life at our campus.
It has been a while… one of the reasons I lack time to write a blog post is that I am spending roughly two full days per week on open access meetings, discussions and issues, on top of the normal day-to-day business. All that work can be traced back to the letter that our State Secretary Sander Dekker issued just over a year ago in The Netherlands (English version).
The reason to take time to write now is that I just spent a day and a half in London at the PASTEUR4OA meeting, where the Key Nodes of the EU Member States gathered to discuss how they can work together to promote open access policy alignment in Europe. The idea is to form a network of expert centres that will help each other and act as national liaisons to the policy makers, for advice, exchange and implementation. I will not expand too much on this project or meeting, because the results will of course be shared via the project website.
One deliverable of the project is an overview of current policies, including a check on whether they comply with the H2020 recommendation and an analysis of which elements in these policies are the most effective. Perhaps the most strikingly effective element was in the mandate at the University of Liège (and Bernard Rentier was present to explain it). As Rentier phrased it: “Only publications in our repository are taken into account for internal evaluation”.
We could not leave the meeting without knowing what H2020 says on Open Access. To recap:
– Open access is mandatory for peer-reviewed publications
– Green open access is the “must”
– Grant holders are allowed to pay in gold open access
– Monographs are not mentioned, and
– Open data pilots are encouraged.
PASTEUR4OA will also be organizing policy meetings in the five EU regions they have created, two per region: one focusing on funding bodies and one on (research) institution managers.
One of the advantages of spending some time with a rather small group of people involved in a common theme is that you may encounter new people or new insights. The disadvantage of a common theme and a small group is that everybody already knows each other, leaving fewer new insights or new people to meet. For me, fortunately, that was not the case.
On the evening of our dinner I ended up talking with Keith Jeffery and Melanie Imming. Keith asked me what I “had with Open Access”. And it struck me that – strangely – the question surprised me and I had to think about it for a bit. Yes, of course there are the obvious arguments: a research institution does all the work (writing, reviewing and/or editing) and then needs “quite a bit of” money to get access to its own output, and that access is often limited to the select group able to pay for it. But what about me: why do I spend so much time on the Open Access issue?
I can explain this by referring to what our own rector, Karel Luyben from Delft, says: “..TU Delft is dedicated in making a significant contribution to finding responsible solutions to societal problems, at both a national and international level. Our mission is to deliver Science to Society. Open Science is an important way to spread our mission around the world.”
As TU Delft Library we are convinced that you (as researcher, student or teacher, though perhaps this is a generic rule) will perform better if you use knowledge created by others and share your own. So knowledge should flow freely. Of course there are prerequisites for being able to do this, e.g. the protection of intellectual property (dealt with in the Creative Commons licenses) or a sustainable infrastructure (via publishers, or via institutional repositories).
Anyway, that brings me to the other reason Open Access is a big chapter for me and my fellow library directors in The Netherlands. We are implementing what our State Secretary asked us to do last year:
“The agreements in 2014 should be based on the premise that publishers will make all their journals open access or that they are prepared to negotiate arrangements to offset article publishing charges with licensing fees in order to avoid double payment. Researchers should continue to have worldwide access to research publications.”
And so that is what we are committed to do, see the recent press releases on the negotiations with Elsevier and Springer. And we will need to make sure, together with the VSNU, that we keep things manageable, sustainable, and most preferably accessible for our researchers. Exciting times!
This time I am wrapping up the “International Data Week” in Amsterdam, with the RDA 4th plenary (“Reaping the fruits”) as the main event on 22-24 September 2014, and a range of satellite events on data taking place in the same week. Just a (very) short impression!
Robert-Jan Smits kicked off the RDA meeting on Monday, with 520 attendees present, by saying that only 10-30% of scientific articles can be reproduced. He urged the community to change its culture, and to “treat your data as you treat your publications”.
The video by Neelie Kroes contained a few nice phrases, e.g. “Open science depends on open minds, and it can grow if we build it upon trust”.
Barend Mons held a very entertaining keynote, “Bringing Data to Broadway”, and introduced his FAIR play: making research findable, accessible, interoperable and reusable. Barend referred to his Data FAIRPORT. Do not say “open” all the time; perhaps call it fair science (I will offer this suggestion at the end of the EC public consultation on Science 2.0!).
He showed us that data loss is real and significant, while data growth is staggering. We should realize how important data stewardship is: educate, reward and keep data scientists. Professionalize data stewardship! 5% of research funding should go to data stewardship; it is really worth the money. So reward the data steward, and introduce a research object impact factor. And do not forget: “Knowledge is like laughter, it increases when shared”.
I could only attend this first day partially, and then the third day. As at the previous plenaries, the RDA holds a lot of parallel sessions, in which the interest groups and working groups discuss their challenges and progress.
The working group on workflows (part of the Publishing Data interest group) is in the midst of a workflow analysis, and called on people to look at its Excel sheet and add new workflows or columns to address. A few example workflows were presented. Martina Stockhause opened a discussion on versions of data; her suggestion was to have a high-level persistent identifier based on a collection, and then allow for changes within it. We thought her question would be addressed by the group on Dynamic Data (I cannot find the correct link to this group though!).
The closing panel on the third day gave an overview of the data situation in Brazil, Japan, Canada and the US. A few interesting, some slightly contradictory, observations:
- Should we refer to open data, or should we allow for a variety of ways in which access can be arranged, realizing that the private sector wants to exploit its data?
- Do not create artificial silos between research and industry.
- Data requires us to think in objects and connections, and we should work on improving services.
- Beware of “going down the rathole of sustainability”. In the end it is of course far more expensive not to invest in infrastructure.
In the coming six months (up to the next plenary, in San Diego) the RDA will focus on adoption (using and eating the fruits), and it will cluster the interest groups and working groups. I think that is a sensible thing to do.
One of the panel’s remarks was that you need a national infrastructure to be able to participate in a global infrastructure, and that we should exchange best practices. I am proud that in the Netherlands we managed to establish Research Data Netherlands, a coalition in which three data archives now share their experience and work together on realizing sustainable data archiving.
Talking about processes is useful and necessary, but it was very rewarding to hear presentations by six researchers during the Dutch Data Prize Award on 24 September.
On Thursday the RECODE workshop took place (and, as said, there were many more interesting events this week). RECODE aims to hold its final conference in Athens in January 2015. People at the workshop were invited to comment on the draft recommendations document of work package 5.
The group wants to produce evidence-based policy recommendations. They have identified four stakeholder groups: funders, research institutions, data managers and publishers (the question was raised whether researchers should be added as a stakeholder). To give a quick idea:
- Funders: Develop, implement, monitor and evaluate open access to research data. (During the panel later on, we discussed whether there was a funder that supports reusing data; that could be an addition to this short list.)
- Research institutions: Develop data management strategies, develop reward systems, develop training programs and support awareness-raising.
- Data managers: Develop a mission and responsibilities, develop sustainable business models, achieve trustworthiness of repositories and content, and develop data management services.
- Publishers: Establish policies for the deposit of data and require data submissions in certified repositories.
Daniel Spichtinger (European Commission, DG Research and Innovation) took part in the workshop and told us about the European Commission’s pilot for open access to research data. A few things were new to me: apparently deposit in a repository is mandatory, but there is no requirement that it be a trusted repository. The opt-outs for opening up your data cover a wide range: a conflict with the protection of results, a confidentiality issue or possible risk to national security, the protection of personal data, and more. Also new to me was that, apart from the selected areas (in the Excellence, Industrial Leadership and Societal Challenges programmes), all projects may join the pilot on a voluntary basis. Furthermore, data management plans are mandatory but are not part of the project evaluation; they are required 6 months after the project starts. At the end Daniel gave a nice quote: “This pilot gives you a chance to co-shape policy on opening up research data.“ We also now know the uptake so far (out of 3054 proposals): the opt-out rate is 24% in the core areas, and the opt-in rate is 27% in the other areas.
I am ending my post here, but our team, especially the product group Research Data Services, was of course present in (almost) full strength. Apart from helping the main organisation DANS and sponsoring the programme as 3TU.datacentrum (which we coordinate), we followed or contributed to Libraries for research data, Data publication, Long tail data, and workshops on technique, training, policy and certification. A very busy week indeed!
Reporting on the ALPSP international conference, 10-12 September 2014.
While I am at this (learned society) publishers’ congress, our Board of Directors sent a message on September 11 to all employees at TU Delft announcing that the way forward is open access: open access so that contributions to science are spread, read and re-used. In the Netherlands, State Secretary Dekker recently expressed his views on the transition of Dutch academic publications to Open Access, which he hopes to achieve within five years in 60% of cases.
A few days in London, that’s what you think. However, the ALPSP meeting is based in a hotel (Park Inn) at Heathrow, so the only thing you see (and hear) is the airport.
The keynote opener was Amy Brand from Digital Science. She presented an overview of products (many of which fall under Digital Science) that help take pain away from researchers: “Pain is the mother of invention”. Products or services I had not heard of before were Sparrho, Sciencescape and Uberresearch.
A bit about libraries
The first day (only an afternoon) held two things after the keynote: a library panel (I think those were the only librarians present, apart from me) and the presentations for the Innovation Award (winner to be announced on Thursday, September 11). Surprise surprise, I liked the library panel. The topic was whether we (librarians / publishers) are competitors or customers. Jill Taylor-Roe asked us to recalibrate the collaboration. Would it be better together? Graham Stone tried to tickle the audience (as you will understand, mainly publishers) by explaining that repositories actually help impact: “It is not about stealing. We have paid, we link and drive usage”. At his university (Huddersfield) they now have their own Press, and publish their undergraduate journal with rigorous peer review. Like Jill, Graham pointed out that libraries and publishers are both contributors (not competitors) to advancing science. The really important people are the authors, not the librarians or the publishers.
Seven nominations for the Innovation Awards, 5 minutes each. Wow! I liked (though my information might not be complete;-) bioRxiv, initiated by Cold Spring Harbor Laboratory Press; as I understand it, a good filter on the way to journal publication. Another success was presented by Frontiers and its Open Science platform, where open access journals are openly peer reviewed, with top scientists as editors, maximising impact for authors. They even started Frontiers for Young Minds, with an editorial board of kids! The last one I will mention is Edifix (from Inera), which makes inaccurate references a thing of the past.
Impact and relevance
A panel opened the next day, and it was lively enough to be worth attending. I especially appreciated the contributions from David Smith (IET) and chair Toby Green (OECD). Take a look at OECD’s Freemium Access publishing, more or less the differentiated access Amy Brand presented the day before: a mixture of free (just reading) and premium (paying, for downloads or usage). I had a nice chat with David afterwards, in which we discussed the typical behaviour of engineers; IET, for example, noted a very low percentage of mobile traffic on their platform. I also spoke with the panellist from Thieme, who claimed that open access would leave libraries without a job. What we see in Delft is that we advise on the publication route and are starting to administer APCs, so I did not agree with his observation.
Melinda Kenneway opened an interesting session on metrics. I had heard Mike Taylor from Elsevier at the APE, but his story this time was different: he advocated the use of multiple or mixed metrics, neither bibliometrics nor altmetrics alone, but choosing the right metrics based on the questions of the customers. We also had the joy of listening to Euan Adie, founder of Altmetric. He defines altmetrics as everything in metrics that is not citations, resulting in a broader view of impact. An idea that popped up in my head while he talked was whether we (as librarians) should start tweeting much more about the Delft publications, as part of our workflow.
In the afternoon, presentations on “cracking the discovery code” were scheduled, in which EBSCO, Sage and Graham Stone (again, see the day before) talked. Graham’s university was the first Summon client in the UK (2009). I liked the way he referred to the discovery system as a tool that is “levelling the playing field, giving every journal a fair chance”, and he warned us librarians to resist the desire to make mini-librarians out of our users. I also made a note to look at TERMS (the top 14 deal breakers when licensing electronic resources), which he co-created, so I will definitely get back to his presentation.
Fiona Murphy chaired the final session of the day, on big data. There was not much new for me in that session, though it struck me that the underlying message was that we should be careful: we should not make confusing correlations, we will need human interaction to structure the data, and we need to make authors aware of how to correctly cite data. I learned from my neighbour, one of the Innovation Award nominees from Inera, that 20% of the provided data citations have incorrect DOIs. We can of course also have fun, e.g. with the autocomplete text Google makes (why is UK so … cold).
Text mine or yours or …
On Friday morning we had our final sessions. Both were interesting. Gemma Hersh from Elsevier explained their text and data mining policy and went through the criticism received via, among others, LIBER. They have changed a few things since; e.g. users are no longer asked to provide a project description when they register for mining. The requirement to register, however, will not be changed, according to Hersh. CrossRef is now working on a cross-publisher solution. The talk by researcher Lars Juhl Jensen (here is his blog) nicely touched upon some of these issues. Jensen gave a wonderful talk about what text and data mining is all about, and about how researchers like himself just want to be able “to take it, mine it and make it publicly available”. Further reading about this (and other) sessions at the ALPSP blog.
The final session had open access as its topic. Chair Wim van der Stelt from Springer tried to bring some different angles to the discussion. We had people from Wiley, the Royal Society and BMC talking about, respectively, flipping your subscription journal, learning from starting an open access journal, and having the dialogue with your customer. I liked the contribution by Jackie Jones (Wiley). She gave the Wiley criteria to flip or not to flip: obviously parameters such as rejection rate, submission level, funder behaviour and proven open access success in the discipline are relevant in this decision. Phil Hurst (The Royal Society) mentioned that they will only be launching open access journals from now on. He referred to the SPARC page that summarizes why open access is a big benefit. I did not really get an answer when I asked when the time would come that the default for launching a new journal would be open access, realizing that large publishers could also drive the change and influence behaviour. Well, I wrote earlier about that of course! Though we all agreed at the congress that our authors and researchers are the most important stakeholders, perhaps some other stakeholders might influence the direction publishers are taking;-)