International Conference on Open Repositories : Proceedings
Feed: https://biecoll2.ub.uni-bielefeld.de/index.php/or/issue/feed
Contact: Susanne Riedel (0521-1064058 / UHG, V1-131), publikationsdienste.ub@uni-bielefeld.de
Platform: Open Journal Systems


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/50
A Comparative Analysis of Institutional Repository Software
Siddharth Kumar Singh, Michael Witt, Dorothea Salo
Published: 2010-12-31

This proposal outlines the design of a comparative analysis of the four institutional repository software packages that were represented at the 4th International Conference on Open Repositories, held in 2009 in Atlanta, Georgia: EPrints, DSpace, Fedora and Zentity (https://or09.library.gatech.edu). The study includes 23 qualitative and quantitative measures taken from default installations of the four repositories on a benchmark machine with a predefined base collection. The repositories are also being assessed on the execution of four common workflows: consume, submit, accept, and batch. A panel of external reviewers provided feedback on the design of the study and its evaluative criteria, and input is currently being solicited from the developer and user communities of each repository in order to refine the criteria, measures, data collection methods, and analyses. The aim is to produce a holistic evaluation that will describe the state of the art in repository software packages in a comparative manner, similar in approach to Consumer Reports (http://www.consumerreports.org). The output of this study will be highly useful for repository developers, repository managers, and especially those who are selecting a repository for the first time.
As members of these respective communities and the organizations that support them are increasingly collaborating (e.g., DuraSpace), this study will help identify the relative strengths and weaknesses of each repository to inform the "best-of-breed" in future solutions that may be developed. The study’s methods will be presented in a transparent manner with documentation to support their reproducibility by a third party.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/52
An Open-Source Digital Archiving System for Medical and Scientific Research
Julien Jomier, Adrien Bailly, Mikael Le Gall, Ricardo Avila
Published: 2010-12-31

In this paper, we present MIDAS, an open-source web-based digital archiving system that handles large collections of scientific data. We created a web-based digital archiving repository based on open standards. The MIDAS repository is specifically tuned for medical and scientific datasets and provides a flexible data management facility, a search engine, and an online image viewer. MIDAS allows researchers to store, manage and share scientific datasets from the convenience of a web browser or through a generic programming interface, thereby facilitating the dissemination of valuable imaging datasets to research collaborators.
The system is currently deployed at several research laboratories worldwide and has demonstrated its ability to streamline the full scientific processing workflow from data acquisition to analysis and reports.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/53
Archival description in OAI-ORE
Deborah Kaplan, Anne Sauer, Eliot Wilczek
Published: 2010-12-31

This paper seeks to define a new method for representing and managing description of archival collections using OAI-ORE. This new method has two advantages. Firstly, it adapts traditional archival description methods for the contemporary reality that digital collections, unlike collections of physical materials, are not best described by physical location. Secondly, it takes advantage of the power of OAI-ORE to allow for a multitude of non-linear relationships, providing richer and more powerful access and description.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/54
Author identifiers: 1) Services at arXiv and 2) ORCID and repositories
Simeon Warner
Published: 2010-12-31

I will present two separate but related topics, where experience with the first provides much of my perspective on the second. Public author identifiers at arXiv, and services based on them, were introduced in March 2009, and early work and design was reported at OR09. The original services have been running for a year now and additional facilities have been added. I will report on uptake and usage patterns, and describe the more popular services. ORCID is an exciting initiative involving both commercial and academic participants that aims to build a registry and assign identifiers to address the author ambiguity problem.
I will report on the current status of this rapidly evolving project and suggest how the repository community may contribute to and benefit from it.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/57
BibApp 1.0: Campus Research Gateway and Expert Finder
Sarah L. Shreeves, Bill Ingram, Eric Larson, Dorothea Salo, Tim Donohue
Published: 2010-12-31

Research institutions, particularly universities and colleges, often face challenges in understanding the range of research, publications, and collaborations occurring within their boundaries, and often struggle to keep up with them. Faculty and researchers are sometimes at a loss to find fruitful collaborations on campus. Libraries often lack the data to truly understand the publication patterns and trends among faculty. Repository managers spend much time trying to identify publications that can go into a repository. Faculty and departmental web pages are inconsistently complete and sometimes out of date; annual reporting processes are still sometimes paper-based or are not integrated into an institution-wide workflow. Grants and contracts offices generally can only provide a view of the departments that are heavily reliant on grants. There is seldom one place where an administrator, a faculty member, a funder, a potential graduate student, or a subject librarian can go to better understand the research occurring on campus. Within this environment a number of tools are in development to help fill gaps in managing, displaying, searching, and mining the publication and citation data that are byproducts of the scholarly communication process.
Cornell’s VIVO (http://vivo.cornell.edu/), Harvard’s Catalyst (http://catalyst.harvard.edu/), MIT’s Citeline (http://citeline.mit.edu/), and the BibApp, the subject of this paper, from the University of Illinois at Urbana-Champaign and the University of Wisconsin at Madison (http://bibapp.org, with a pilot installation at http://connections.ideals.illinois.edu/) are all examples of such tools.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/59
Blurring the boundaries between an institutional repository and a research information registry: where's the join?
Sally Rumsey
Published: 2010-12-31

Key motivations for provision of an institutional repository (IR) for research outputs within a higher education institution (HEI) are storage, retention, dissemination and preservation of digital research materials. Increasingly, IRs are being considered as tools for research management as part of pan-institutional systems. This might include statutory reporting such as that required for the forthcoming UK REF (Research Excellence Framework). Such functionality generally requires integration with other management systems within the HEI. It is common to find that each research management system has been selected to serve a specific need within an organisational department, any broader aim being out of scope. As a result, data is held in many silos, is duplicated and can even be "locked in" to those systems. This results in problems with data sharing, as well as a lack of efficiency and consistency. Some institutions are addressing this problem by considering CRISs (Current Research Information Systems) or business intelligence systems. The need for easy deposit in the institutional repository at the University of Oxford has prompted the development of a registry and tools to support research information management.
Many of the motivations behind the repository are shared with those for research information management. Not only do the two areas of focus have many common aims, but there is considerable overlap of design, data, services, and stakeholder requirements. This overlap means that the boundaries between the repository and the resulting tools being implemented for publicly available research activity data are blurred. By considering these two areas together with other related digital repository services, new opportunities and efficiencies can be revealed to the benefit of all stakeholders.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/60
BRIL - Capturing Experiments in the Wild
Mark Hedges, Shrija Rajbhandari, Stella Fabiane
Published: 2010-12-31

This presentation describes a project to embed a repository system (based on Fedora) within the complex, experimental processes of a number of researchers in biophysics and structural biology. The project is capturing not just individual datasets but entire experimental workflows as complex objects, incorporating provenance information based on the Open Provenance Model, to support reproduction and validation of published results. The repository is integrated within these experimental processes, so that data capture is as far as possible automatic and invisible to the researcher. A particular challenge is that the researchers’ work takes place in local environments within the department, entirely decoupled from the repository.
In meeting this challenge, the project is bridging the gap between the “wild”, ad hoc and independent environment of the researcher’s desktop and the curated, sustainable, institutional environment of the repository, and in the process the project crosses the boundary between several of the pairs of polar opposites identified in the call.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/65
Curation Micro-Services: A Pipeline Metaphor for Repositories
Stephen Abrams, Patricia Cruse, John Kunze, David Minor
Published: 2010-12-31

The effective long-term curation of digital content requires expert analysis, policy setting, and decision making, and a robust technical infrastructure that can effect and enforce curation policies and implement appropriate curation activities. Since the number, size, and diversity of content under curation management will undoubtedly continue to grow over time, and the state of curation understanding and best practices relative to that content will undergo a similar constant evolution, one of the overarching design goals of a sustainable curation infrastructure is flexibility. In order to provide the necessary flexibility of deployment and configuration in the face of potentially disruptive changes in technology, institutional mission, and user expectation, a useful design metaphor is provided by the Unix pipeline, in which complex behavior is an emergent property of the coordinated action of a number of simple, independent components. The decomposition of repository function into a highly granular and orthogonal set of independent but interoperable micro-services is consistent with the principles of prudent engineering practice. Since each micro-service is small and self-contained, the services are individually more robust and collectively easier to implement and maintain.
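The pipeline metaphor described above can be sketched in a few lines. This is a minimal illustration only: the service names (checksum, identify) and object layout are hypothetical, not the actual UC3 micro-service specifications.

```python
# Minimal sketch of the Unix-pipeline metaphor for curation micro-services.
# Each service is small and self-contained; complex behavior emerges from
# chaining them. Service names and the object shape are hypothetical.
import hashlib

def checksum(obj):
    # Fixity micro-service: record a digest of the object's content.
    obj["md5"] = hashlib.md5(obj["content"]).hexdigest()
    return obj

def identify(obj):
    # Identity micro-service: mint a simple local identifier.
    obj["id"] = "local/" + obj["name"]
    return obj

def pipeline(obj, services):
    # Coordinate independent services, Unix-pipeline style.
    for service in services:
        obj = service(obj)
    return obj

obj = pipeline({"name": "thesis.pdf", "content": b"..."}, [checksum, identify])
```

Because each stage only consumes and returns the object, services can be recombined in different strategic orders without changing any individual component.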
By being freely interoperable in various strategic combinations, any number of micro-services-based repositories can easily be constructed to meet specific administrative or technical needs. Importantly, since these repositories are purposefully built from policy-neutral and protocol- and platform-independent components to provide the function minimally necessary for a specific context, they are not constrained to conform to an infrastructural monoculture of prepackaged repository solutions. The University of California Curation Center has developed an open-source micro-services infrastructure that is being used to manage the diverse digital collections of the ten-campus University system and a number of non-university content partners. This paper provides a review of the conceptual design and technical implementation of this micro-services environment, a case study of initial deployment, and a look at ongoing micro-services developments.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/68
Diversity and Interoperability of Repositories in a Grid Curation Environment
Jens Ludwig, Harry Enke, Thomas Fischer, Andreas Aschenbrenner
Published: 2010-12-31

Repository-based environments are increasingly important in research. While grid technologies and their relatives used to draw most attention, the e-Infrastructure community is now often looking to the repository and preservation communities to learn from their experiences. After all, trustworthy data management and concepts to foster the agenda for data-intensive research (Data-Intensive Research: how should we improve our ability to use data. e-Science Theme, March 2010. http://www.nesc.ac.uk/esi/events/1047/) are among the key requirements of researchers from a great variety of disciplines. The WissGrid project (WissGrid - Grid for the Sciences, a D-Grid project.
Funded by the German Federal Ministry of Education and Research (BMBF). www.wissgrid.de) aims to provide cross-disciplinary data curation tools for a grid environment by adapting repository concepts and technologies to the existing D-Grid e-Infrastructure. To achieve this, it combines existing systems including Fedora, iRODS, dCache, JHOVE, and others. WissGrid respects the diversity of systems, and aims to improve the interoperability of the interfaces between those systems.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/69
DOARC - Distributed Open Access Reference Citations Service
Michael Maune, Eberhard R. Hilf
Published: 2010-12-31

DOARC (www.isn-oldenburg.de/projects/doarc2/, with its demonstrator at doarc.projects.isn-oldenburg.de), Distributed Open Access Reference Citations, is a new service under development, to be served by DINI as part of its emerging OA-Network System (www.dini.de and www.dini.de/projekte/oa-netzwerk) and funded by the DFG (German Science Foundation, www.dfg.de), which aims at creating an interactive reference index for scientific documents. Special emphasis is given to the Open Access (OA) documents posted by the present German OAI-PMH institutional repositories at universities and large research institutions. One part of it will be a citation-based user interface with tools for authors and readers. The general motivation behind DOARC is to provide add-on services with regard to citations and specifically to exploit the opportunities that the OA document world offers through its access to the full-text documents. This will provide an extra benefit for both authors and readers, thus helping to spread OA and, in the end, to increase the rate of citations in an OA world.
Specifically, DOARC will give authors a tool to ensure that they cite correctly and that their document's reference list is extracted and added to the pool of DOARC citations. Readers will get a tool with which they can find documents relevant to them by browsing through citations, and a graphical tool which shows the 'content affinity' to other documents in the widely distributed pool of scientific OA papers. We will exchange our checked metadata with other citation services and further the know-how for non-commercial citation services. In the interface the user will be able to see references with additional information of high value (enriched metadata). We are integrating the services into a wider European context by joining a new initiative organized by Alma Swan of Key Perspectives.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/75
DuraCloud Pilot Program: utilizing cloud infrastructure as an extension of your repository
Michele Kimpton
Published: 2010-12-31

Cloud compute and cloud storage are now available from several commercial providers around the globe as a commodity service. Due to the extremely low cost, ease of use, and instant scalability, the academic community is taking a hard look at how best to utilize this resource as an extension of its own IT environment. Even though there has been much interest in using cloud infrastructure, few organizations have seriously integrated the cloud as part of their environment, due to several foreseen challenges, whether real or perceived. In a large survey undertaken by the DuraSpace organization in winter 2010, some of the biggest challenges identified by our community were security, performance, reliability and trust.
Little data has been published to either validate or discredit the key challenges noted within the academic community, although much anecdotal information exists regarding site outages, poor performance, loss of data and the like. The purpose of this presentation is to present the findings of a large-scale pilot program utilizing cloud infrastructure from multiple commercial cloud providers as a utility. The presentation will discuss the key challenges and benefits identified when using cloud storage and compute as a utility during the pilot program. It will provide detailed analysis, where possible, across multiple cloud providers, including, when applicable, what solutions were deployed to overcome security, reliability, performance and other identified technical issues.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/77
DuraSpace's Solution Communities: Marshalling the Resources for Open-source Development
Thornton Staples, Valorie Hollister
Published: 2010-12-31

In his opening plenary address to the Open Repositories Conference, James Hilton made the statement that "open-source software is free like a puppy." This statement succinctly summarizes the need for institutions that benefit from freely available software to get involved in its ongoing development: an investment of resources is always required.
Anyone who has worked with information technology in libraries, museums and archives knows that, compared with the total cost of buying vendor software and making it actually do the desired job, there can be a great deal of room to save money while making a significant investment of resources in the process.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/79
Enhancing Statistics: Google Analytics and Visualization APIs
Graham Triggs
Published: 2010-12-31

Usage statistics have been an important topic in the repository community for some time. From Minho's DSpace additions, through Interoperable Repository Statistics, to @mire's Solr-based contribution to DSpace 1.6, there have been many approaches to providing statistics. One technique that has been used in a few places is to set up a Google Analytics account. This has several advantages: it is free, independent of the repository (and its architecture), of proven scalability, and offers excellent tools for visualizing the data. But it has historically had its problems too: it doesn't understand the structure of the repository (for displaying totals or top views/downloads for an arbitrary grouping of the content), it doesn't track downloads without additional work (or those directly linked from search engines), and the reports are locked behind an authentication wall and can't be opened up to general repository users. With the [April 2009] release of an API to retrieve data from Google Analytics, that has changed. Data that has been calculated in Google Analytics can be pulled back into the repository, so that it can be viewed in context, and by anyone who can access the repository (or not, depending on implementation). This presentation shows how Google Analytics can be integrated with the repository, techniques for capturing data that wouldn't normally be available with Analytics, and making the data comprehensible through visualizations.
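The repository-side step described here — rolling per-page analytics rows up into repository groupings such as collections — can be sketched as follows. The row format and path-to-collection mapping are hypothetical; a real integration would obtain the rows from the Google Analytics data API.

```python
# Sketch: re-aggregating per-page analytics rows into repository-level
# groupings (e.g. top collections by pageviews). The row tuples and the
# path-to-collection mapping below are illustrative assumptions, not the
# actual Google Analytics response format.
from collections import defaultdict

rows = [  # (page path, pageviews) as retrieved from the analytics service
    ("/handle/123/1", 40),
    ("/handle/123/2", 25),
    ("/handle/456/7", 60),
]

collection_of = {  # repository-side knowledge Analytics does not have
    "/handle/123/1": "Theses",
    "/handle/123/2": "Theses",
    "/handle/456/7": "Articles",
}

totals = defaultdict(int)
for path, views in rows:
    totals[collection_of[path]] += views  # roll pages up to collections

top = sorted(totals.items(), key=lambda kv: -kv[1])  # ranking for display
```

The ranked totals can then be fed to any charting or visualization API, which is what makes the visualization layer independent of the analytics source.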
Whilst the implementation presented here was initially conceived using a DSpace repository, the techniques can be replicated in any repository software. Further, the visualization methods are independent of the analytics data themselves, so they can be adapted for other sources of data.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/82
From Dynamic to Static: the challenge of depositing, archiving and publishing constantly changing content from the information environment
Richard Jones
Published: 2010-12-31

A repository used for storing and disseminating research is a canonical example of a system which strives to produce stable artefacts which can be reliably referenced, if not actually preserved, over time. This is a difficult task, since the normal state of information is constant flux: being updated, revised, rewritten, removed and republished. Recent work in deposit technology has tended to centre around the use of a repository as a 'final resting place' for some research item. It has typically used packages of content, roughly analogous to the SIP (Submission Information Package) in OAIS (http://public.ccsds.org/publications/archive/650x0b1.pdf), to insert 'finished' works into the archive. An example of this is SWORD (http://www.swordapp.org/), which addresses the deposit mechanism in great detail but is largely reliant on the payload being a single file (for example, a zip) containing all the information that the repository needs in order to create an archival object. This places a burden on the depositor to make an assertion that an item is finished and ready for archiving, and pushes tasks that the repository is traditionally good at (i.e. storing content) out to whatever system the user is creating their work in.
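The single-file "package" this deposit model relies on can be sketched as a zip holding the content files plus a metadata document. The file names and metadata layout here are illustrative assumptions, not an actual SWORD packaging profile.

```python
# Sketch of a single-file SIP-style deposit package: one zip containing the
# content payload and a metadata document. File names and metadata layout
# are hypothetical, not the SWORD packaging specification.
import io
import zipfile

def build_package(files, metadata_xml):
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("metadata.xml", metadata_xml)  # descriptive metadata
        for name, content in files.items():
            zf.writestr(name, content)             # the content payload
    return buf.getvalue()

pkg = build_package({"article.pdf": b"%PDF-1.4 ..."},
                    "<dc><title>Example item</title></dc>")
# In a package-based deposit, these bytes would be POSTed to the
# repository's deposit endpoint as the complete, 'finished' item.
```

Note how everything the repository needs must be assembled client-side before deposit, which is exactly the burden on the depositor that the abstract describes.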
Over the past year, Symplectic Ltd (http://www.symplectic.co.uk/) has attempted to break down this reliance on the "package" and to move repository deposit in the direction not only of full CRUD (Create, Retrieve, Update, Delete), but also of giving repository workflows the opportunity to define when a work is "finished" (at least provisionally). This will give the repository the opportunity to do what it does best (i.e. store content), and allow the administrators - experts in repositories and archiving - to have a hand in determining whether an item is "finished", relieving the depositor and their research process of these burdens.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/83
From research data repositories to virtual research environments: a case study from the Humanities
Mark Hedges, Tobias Blanke, Mike Priddy, Fabio Simeoni, Leonardo Candela
Published: 2010-12-31

The difference in scholarly practices between the sciences and the mainstream humanities is highlighted in a study (Palmer et al., 2009) which investigated the types of information source materials used in different humanities disciplines, based on results contained in the US Research Libraries Group (RLG) reports. Structured data is relatively little used, except in some areas of historical research, and data as it is traditionally understood in the sciences, i.e. the results of measurements and the lowest level of abstraction for the generation of scientific knowledge, even less so. It is true that the study is partly outdated, containing results from the early 1990s, and that data in the traditional sense is becoming increasingly important in the humanities, particularly for disciplines such as linguistics and archaeology, in which scientific techniques have been widely adopted.
Nevertheless, it is clear that in general humanities research relies not on measurements as a source of authority, but rather on the provenance of sources and assessment by peers, and that what data repositories are for the sciences, archives are for the humanities. [...]


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/85
Hydra: A Technical and Community Framework For Customized, Reusable, Repository Solutions
Tom Cramer, Lynn McRae, Willy Mene, Bess Sadler, Chris Awre, Richard Green, Tim Sigmon, Thornton Staples
Published: 2010-12-31

While repositories provide obvious benefits in hosting and managing content, it is equally clear that there is no "one size fits all" solution to the range of digital asset management needs at a typical institution, much less across institutions. A system that supports the submission, approval and dissemination of electronic theses and dissertations, for example, has demonstrably different requirements than a digitization workflow solution, an e-science data repository, or a media preservation and access system. There is a clear need in the repository community to readily develop and deploy content-, domain-, and institution-specific solutions that integrate the flexibility and richness of customized applications and workflows with the underlying power of repositories for content management, access and preservation. Hydra is a multi-institutional, multi-functional, multi-purpose framework that addresses this need on twin fronts. As a technical framework, it provides a toolkit of reusable components that can be combined and configured in different arrays to meet a diversity of content management needs.
As a community framework, Hydra provides like-minded institutions with the mechanism to combine their individual development efforts, resources and priorities into a collective solution with a breadth and depth that exceeds the capacity of any single institution to create, maintain or enhance on its own.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/88
INSPIRE: A new information system for High-Energy Physics. Lessons learnt
Salvatore Mele
Published: 2010-12-31

E-Science brings opportunities and challenges for the world of scholarly communication: it amplifies the needs of scientists for fast, effective, unrestricted communication of ideas and scientific results, through Open Access; it enables automation of librarianship intelligence, providing new services to the scientific community for the discovery of information; it calls on libraries and information professionals to fill new roles, as evolving actors in the scholarly communication chain. The field of High-Energy Physics (HEP) has pioneered infrastructures for scholarly communication, with half a century of tradition in Open Access and pre-print dissemination and two decades of experience with repositories. Scholarly communication in HEP is now moving fast in the e-Science era. This contribution will analyze the status of scholarly communication in the field and the potential offered by the inception of INSPIRE, the next-generation repository for the field.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/89
Institutional Repositories, Long Term Preservation and the changing nature of Scholarly Publications
Paul Doorenbosch, Barbara Sierman
Published: 2010-12-31

In Europe, over 2.5 million publications of universities and research institutions are stored in institutional repositories.
Although institutional repositories make these publications accessible over time, it is not the task of a repository to preserve the content for the long term. Some countries have developed an infrastructure dedicated to sustainability; the Netherlands is one of those countries. The Dutch situation can be regarded as a successful example of how long-term preservation of scholarly publications is organised through an open access environment. In this contribution to the Open Repositories Conference 2010, it will be explained how this infrastructure is structured, and some preservation issues related to it will be discussed. This contribution is based on the long-term preservation studies into Enhanced Publications performed in the FP7 project DRIVER II (2007-2009, Digital Repository Infrastructure Vision for European Research II, WP 4 Technology Watch Report, part 2, Long-term Preservation Technologies (Deliverable 4.3/Milestone 4.2), http://www.driver-repository.eu/. The official report is downloadable at http://research.kb.nl/DRIVERII/resources/DRIVER_II_D4_3-M2_demonstrator_LTP__final_1_0_.pdf; the public version is part of Enhanced Publications: Linking Publications and Research Data in Digital Repositories, by Saskia Woutersen-Windhouwer et al., Amsterdam, AUP, 2009, p. 157-209, downloadable at http://dare.uva.nl/aup/nl/record/316849). The overall conclusion of the DRIVER studies on long-term preservation is that the issues are of an organisational rather than a technical nature.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/94
Interactive Multi-Submission Deposit Workflows for Desktop Applications
David Tarrant, Les Carr, Alex D. Wade, Simeon Warner
Published: 2010-12-31

Online submission and publishing is the norm for academic researchers.
The pressure on these authors to submit their work to conferences, journals and institutional repositories leads to demands on the author to go through multiple web-based interfaces, filling in forms with the same information multiple times before they can submit. At the same time, each of these services in turn will have made policy decisions on what types of format they allow and what templates the content has to conform to. The amount of work expected of the author does not add up to the potential gain, thus most authors will only submit to the repository or publication where they foresee the most benefit. In this paper we propose a solution to this problem that embeds the workflow for multiple submissions into the desktop application of the author, most commonly Microsoft Word. We also propose extending the work done on the Microsoft Word Author Add-in tool to allow two-way negotiation between each repository and the desktop application.


https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/95
Interoperability for digital repositories: towards a policy and quality framework
Giuseppina Vullo, Perla Innocenti, Seamus Ross
Published: 2010-12-31

Interoperability is a property referring to the ability of diverse systems and organisations to work together. Today interoperability is considered a key step in moving from isolated digital repositories towards a common information space that allows users to browse through different resources within a single integrated environment.
In this contribution we describe the multi-level challenges that digital repositories face towards policy and quality interoperability, presenting the approaches and the interim outcomes of the Policy and Quality Working Groups within the EU-funded project DL.org (http://www.dlorg.eu/).2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/97Invenio: A Modern Digital Library System2019-06-05T12:51:37+00:00Samuele Kaplunojs.ub@uni-bielefeld.deInvenio is an integrated digital library system originally developed at CERN to run the CERN Document Server, currently one of the largest institutional repositories worldwide. It was started over 15 years ago and has matured through many release cycles. Invenio is a GPL2 Open Source project based on an Apache/WSGI+Python+MySQL architecture. Its modular design enables it to serve a wide variety of usages, from a multimedia digital object repository, to a web journal, to a fully functional digital library. The development strategy used to implement Invenio ensures it is flexible at every layer. Being based on open standards such as MARCXML and OAI-PMH 2.0, its interoperability with other digital libraries is guaranteed. Having originally been designed to cope with the CERN requirements for digital object management, Invenio is suitable for middle-to-large scale digital repositories (100K~10M records). Records can be of any nature (e.g. papers, books, photos, videos).
This presentation will introduce the different features of Invenio, their usage in the CERN context and how other institutions and projects are also driving some of its development.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/104NARCIS: research information services on a national scale2019-06-05T12:51:29+00:00Arnoud Jippesojs.ub@uni-bielefeld.deWilko Steinhoffojs.ub@uni-bielefeld.deElly Dijkojs.ub@uni-bielefeld.deAs a national aggregator, NARCIS contains the scientific output of 27 institutional OAI-PMH repositories (IRs), with publications and descriptions of research data (datasets) from the Dutch universities, the Academy (KNAW), the Netherlands Organisation for Scientific Research (NWO), the institute for Data Archiving and Networked Services (DANS, http://www.dans.knaw.nl) and other research institutes. NARCIS also contains information from the Current Research Information Systems (CRISs) in the Netherlands on research, researchers (expertise) and research organisations. The data from the IRs and the CRISs in NARCIS are interlinked by identifiers such as the Digital Author Identifier (DAI), a unique identifier assigned to each researcher in the Netherlands. The NARCIS Suite (National Academic Research and Collaborations Information System: http://www.narcis.nl) consists of three main products: the NARCIS Portal (HTTP), the NARCIS Index (SRU) and the NARCIS Repository (OAI-PMH). The NARCIS Portal makes the combined data searchable and available to the public at a national level. 
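The NARCIS Index mentioned above is exposed via SRU, a standard HTTP search protocol whose searchRetrieve operation takes well-known query parameters. As a rough illustration (the base URL and the CQL query below are assumptions for demonstration, not taken from NARCIS documentation), such a request can be built like this:

```python
from urllib.parse import urlencode

def sru_search_url(base_url: str, cql_query: str, max_records: int = 10) -> str:
    """Build a standard SRU 1.1 searchRetrieve URL for a CQL query."""
    params = {
        "operation": "searchRetrieve",  # standard SRU operation name
        "version": "1.1",
        "query": cql_query,             # query expressed in CQL
        "maximumRecords": max_records,
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint; the real NARCIS SRU base URL may differ.
url = sru_search_url("http://www.narcis.nl/sru", 'dc.title = "open access"')
```

The response is an XML searchRetrieveResponse, which an aggregator or portal can parse and render alongside OAI-PMH-harvested content.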
Meeting the requirements of modern information systems requires continual development and a good understanding of NARCIS portal visitors and their needs.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/105On Constructing Repository Infrastructures: The D-NET Software Toolkit2019-06-05T12:51:28+00:00Paolo Manghiojs.ub@uni-bielefeld.deMarko Mikulicicojs.ub@uni-bielefeld.deKaterina Iatropoulouojs.ub@uni-bielefeld.deAntonis Lebesisojs.ub@uni-bielefeld.deNatalia Manolaojs.ub@uni-bielefeld.deDue to the wide diffusion of digital repositories, organizations responsible for large research communities, such as national or project consortia, research institutions, and foundations, are increasingly drawn to setting up so-called repository infrastructure systems (e.g., OAIster (http://www.oaister.org), BASE (http://www.base-search.net), DAREnet-NARCIS (http://www.narcis.info)). Such systems offer web portals, services and APIs for cross-operating over the metadata records of publications (lately also of experimental data and compound objects) aggregated from a set of repositories. Generally, they consist of two connected tiers: an aggregation system for populating an information space of metadata records by harvesting and transforming (e.g., cleaning, enriching) records from a set of OAI-PMH compatible data sources, typically repositories; and a web portal, providing end-users with advanced functionality over such an information space (search, browsing, annotations, recommendations, collections, user profiling, etc.). Typically, information spaces also offer access to third-party applications through standard APIs (e.g., OAI-PMH, SRW, OAI-ORE). Repository infrastructure systems address similar architectural and functional issues across several disciplines and application domains.
On the one hand, they all deal, with more or less contingent complexity, with the generic problem of harvesting metadata records of a given format, transforming them into records of a target format, and delivering web portals to operate over these records. On the other hand, they have to cope with arbitrary numbers of repositories and hence administer them: from the automatic scheduling of harvesting and transformation actions and the definition of the corresponding transformation mappings, to the inherent scalability problems of coping with an ever-growing stream of incoming records. Existing solutions tend to privilege customization of software, neglecting general-purpose approaches. Typically, for example, aggregation systems are designed to generate metadata records of a format X from records of format Y, rather than being parametric with respect to such formats. Similarly, the participation of a repository in an infrastructure is driven by firm policies, and administrators often do not have the freedom to specify their own workflow by combining as they prefer logical steps such as harvesting, storing, transforming, indexing and validating. In summary, repository infrastructure systems typically provide advanced and effective solutions tailored to one scenario of interest, but can hardly be applied to different scenarios, where similar but distinct requirements apply. As a consequence, an organization willing to set up a repository infrastructure system with peculiar requirements faces the "expensive" problem of designing and developing new software from scratch. In this paper, we present a general-purpose and cost-efficient solution for the construction of customized repository infrastructures, based on the D-NET Software Toolkit (www.d-net.research-infrastructures.eu), developed in the context of the DRIVER and DRIVER-II projects (http://www.driver-community.eu).
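The harvest-and-transform pattern at the core of such aggregation systems can be sketched in a few lines: parse an OAI-PMH ListRecords response and map each oai_dc record into a target format. The flat target field names below are an illustrative choice, not a D-NET schema; the namespaces are the standard OAI-PMH and Dublin Core ones.

```python
import xml.etree.ElementTree as ET

# Standard OAI-PMH and Dublin Core namespaces.
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def transform_records(listrecords_xml: str):
    """Map oai_dc records from a ListRecords response into a flat
    target format (field names here are purely illustrative)."""
    root = ET.fromstring(listrecords_xml)
    out = []
    for rec in root.iter(OAI + "record"):
        meta = rec.find(OAI + "metadata")
        if meta is None:  # deleted records carry a header but no metadata
            continue
        out.append({
            "title": next((e.text for e in meta.iter(DC + "title")), None),
            "creators": [e.text for e in meta.iter(DC + "creator")],
        })
    return out
```

A production aggregator would additionally handle resumption tokens, schedule incremental harvests per repository, and make the source and target formats configurable, which is exactly the parametricity the paragraph above argues for.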
D-NET offers a service-oriented framework whose services can be combined by developers to easily construct customized aggregation systems and personalized web portals. D-NET services can be customized, extended and combined to match domain-specific scenarios, while distribution, sharing and orchestration of services enable the construction of scalable and robust repository infrastructures. As we shall describe in the following, D-NET is currently the enabling software of a number of European projects and national initiatives.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/106Open Access Network, Heading for Joint Standards and Enhancing Cooperation. Value-Added Services for German Open Access Repositories2019-06-05T12:51:27+00:00Stefan Buddenbohmojs.ub@uni-bielefeld.deMaxi Kindlingojs.ub@uni-bielefeld.deOA Network collaborates with other associated German Open Access-related projects and pursues the overarching aim of increasing the visibility and ease of use of German research output. To this end, a technical infrastructure is being established to offer value-added services based on a shared information space across all participating repositories. In addition, OA-Network promotes the DINI certificate for Open Access repositories (standardization) and a regular exchange of information in the German repository landscape.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/107Open Access Statistics: An Examination how to generate Interoperable Usage Information from Distributed Open Access Services Publishing2019-06-05T12:51:25+00:00Ulrich Herbojs.ub@uni-bielefeld.deDaniel Metjeojs.ub@uni-bielefeld.dePublishing and bibliometric indicators are of utmost relevance for scientists and research institutions.
The impact or importance of a publication (or even of a scientist or an institution) is mostly regarded as equivalent to a citation-based indicator, e.g. in the form of the Journal Impact Factor (JIF) or the Hirsch index (h-index). On both the individual and the institutional level, performance measurement depends strongly on these impact scores. The most common methods of assessing the impact of scientific publications show several deficiencies, for instance: · The scope of the databases used to calculate citation-based metrics (Web of Science (WoS) with the Journal Citation Reports (JCR), and Scopus) is restricted and more or less arbitrarily defined. · The JIF and the h-index show several disciplinary biases (exclusion of many document types, the two-year timeframe of the JIF, etc.). · Both the JIF and the h-index privilege documents in the English language. Even though in principle citation-based metrics provide some arguments in favour of open access, they mostly disadvantage open access publications - and thereby reduce the attractiveness of open access for scientists. In particular, documents that are self-archived in open access repositories (and not published in an open access journal) are excluded from the relevant databases typically used to calculate JIF scores or the h-index.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/108ORBi in orbit, a user-oriented IR for multiple wins: why scholars take a real part in the success story...2019-06-05T12:51:24+00:00Paul Thirionojs.ub@uni-bielefeld.deFrançois Renavilleojs.ub@uni-bielefeld.deMyriam Bastinojs.ub@uni-bielefeld.deDominique Chalonoojs.ub@uni-bielefeld.deThe University of Liège's institutional repository, ORBi (Open Repository and Bibliography, http://orbi.ulg.ac.be), was officially launched in November 2008.
Barely fourteen months later (February 2010), it already contained more than 30,000 bibliographic references, with more than 20,000 full texts available. In other words, this represents a growth of more than 65 new references a day, which is appreciable for a medium-sized university (17,000 students, 2,700 scholars, about 3,500 new publications/year). According to ROAR (http://roar.eprints.org), ORBi ranks second among 930 institutional repositories for high activity level (i.e. the number of days with more than 100 references archived per day). Furthermore, all these records were archived by the institution's authors themselves; there was neither batch archiving nor mass validation. What are the reasons that may explain such a success?2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/110PIRUS2: Creating a Common Standard for Measuring Online Usage of Individual Articles2019-06-05T12:51:22+00:00Peter Shepherdojs.ub@uni-bielefeld.dePaul Needhamojs.ub@uni-bielefeld.deThis presentation will provide an overview of the PIRUS2 project and will cover the project's background, its main objectives, the planned deliverables, and the benefits to the main stakeholder groups involved in scholarly information, including repositories. A progress report on the project will also be provided.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/112Preserving repository content: practical steps for repository managers2019-06-05T12:51:20+00:00Miggie Picktonojs.ub@uni-bielefeld.deSteve Hitchcockojs.ub@uni-bielefeld.deSimon Colesojs.ub@uni-bielefeld.deDebra Morrisojs.ub@uni-bielefeld.deStephanie Meeceojs.ub@uni-bielefeld.deFew people would disagree that preservation of repository content is important. Indeed, the stated aim of most repositories is to provide permanent open access to the material therein.
Why, then, have so few repositories implemented practical action plans for the long-term preservation of their content? There could be several reasons. Although a number of preservation tools and services already exist, until now few have addressed the specific needs of repositories; in practical terms they have necessitated action that is additional rather than integral to the repository workflow. Repository content is typically highly varied and complex, while descriptive metadata and file formats are used inconsistently, with material deposited by those without knowledge or expertise in managing digital assets. Busy repository managers with little, if any, experience in digital preservation have lacked the time and confidence to tackle what is perceived as an important but complex and scary problem. The JISC-funded KeepIt project is bringing together existing preservation tools and services with appropriate training and advice on preservation strategy, policy, costs, metadata, storage, format management and trust, to enable the participating repository managers to formulate practical and achievable preservation plans. From the point of view of the repository manager, this presentation summarises the activities of the KeepIt project, describes the impact that the project has had on the participating repositories, and suggests steps that other repository managers might take to ensure preservation readiness.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/114Promoting Open Access to librarians and researchers by the international information platform open-access.net2019-06-05T12:51:17+00:00Anja Oberländerojs.ub@uni-bielefeld.deKarlheinz Pappenbergerojs.ub@uni-bielefeld.deOpen access has become an important publication form, but the concept behind it is not as well known as the public discussion sometimes makes us believe. Still today, open access is equated with electronic publication and often confused with offerings like Google Books.
Researchers feel unsure when faced with open access and, as a consequence, often react conservatively. A German Research Foundation (DFG) funded project (2006-2010) attempts to structure and describe the concept of, and the discussion about, open access. With the libraries of the Universities of Bielefeld, Goettingen and Konstanz and the Institute of Qualitative Research in Berlin, four German experts in the area of open access took the initiative to create the now well-known information platform www.open-access.net.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/116PubMan - one Repository with multiple Usage und Re-Use Possibilities2019-06-05T12:51:15+00:00Juliane Müllerojs.ub@uni-bielefeld.dePubMan is an application which allows members of research organizations to store, manage and enrich their publications. The app is based on the eSciDoc infrastructure, a joint project run by the Max Planck Digital Library (MPDL) and the Fachinformationszentrum (FIZ) Karlsruhe. Presenting scholarly work on the World Wide Web has become an important and common procedure for research communities seeking to enhance the visibility of their research results as well as to initiate scientific collaboration and information exchange. In response to that trend much emphasis has been put on the possibility of providing multiple re-use options for metadata, full texts and supplementary material during the conception and development of PubMan. Our repository software facilitates the integration of user-defined publication lists in local websites as well as in personal and topic-centered WordPress blogs.
The paper will depict these two reuse possibilities with examples of operational usage scenarios after giving an overview of the basic concepts and functionalities of PubMan.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/118Ready or not here it comes: Australian institutional research repository data readiness surveys 20102019-06-05T12:50:39+00:00Caroline Druryojs.ub@uni-bielefeld.dePeter Seftonojs.ub@uni-bielefeld.deKate Watsonojs.ub@uni-bielefeld.deAustralian institutional research repositories are now facing a new challenge: datasets and associated metadata. With prior focus predominantly on research outputs, repository managers are now involved in a new phase of repository re-purposing - curation of datasets and associated metadata, and provision of this metadata to a national data commons through ANDS (Australian National Data Service). Through a series of surveys conducted by the national repository support service, CAIRSS (the CAUL Australian Institutional Repository Support Service), this paper examines the research data challenges facing research repository managers, levels of institutional research data identification, and the readiness of traditional institutional research repositories to either curate or work alongside this data.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/124Repository sustainability: arXiv business model experience and implications2019-06-05T12:50:31+00:00Simeon Warnerojs.ub@uni-bielefeld.deOya Riegerojs.ub@uni-bielefeld.deIn January 2010 Cornell University Library moved to expand the funding base for arXiv by requesting support from user institutions. We hope that this voluntary support model will engage the institutions that benefit most from arXiv while maintaining arXiv's open access mission as a service free to readers and submitters alike. 
The development of a business model has made us look closely at arXiv's sustainability from both operational and technical standpoints. The engagement of supporting institutions creates new requirements to demonstrate value to these institutions as separate from arXiv's understood value to the community in general. In this presentation we will briefly describe options considered in the development of the business model, the model chosen, uptake and feedback. We will then focus on the implications for arXiv's operation, for the long-term development of our platform, and for new reporting facilities.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/125Research Data Management in the Lab2019-06-05T12:50:30+00:00Matthias Razumojs.ub@uni-bielefeld.deSimon Einwächterojs.ub@uni-bielefeld.deRozita Fridmanojs.ub@uni-bielefeld.deMarkus Herrmannojs.ub@uni-bielefeld.deMichael Krügerojs.ub@uni-bielefeld.deNorman Pohlojs.ub@uni-bielefeld.deFrank Schwichtenbergojs.ub@uni-bielefeld.deKlaus Zimmermannojs.ub@uni-bielefeld.deResearch, especially in science, is increasingly data-driven (Hey & Trefethen, 2003). The obvious type of research data is raw data produced by experiments (by means of sensors and other lab equipment). However, other types of data are highly relevant as well: calibration and configuration settings, analyzed and aggregated data, data generated by simulations. Today, nearly all of this data is born-digital. Based on the recommendations for "good scientific practice", researchers are required to keep their data for a long time. In Germany, DFG demands 8-10 years for published results (Deutsche Forschungsgemeinschaft, 1998). Ideally, data should not only be kept and made accessible upon request, but be published as well - either as part of the publication proper, or as references to data sets stored in dedicated data repositories. Another emerging trend is data publication journals, e.g.
the Earth System Science Data Journal (http://www.earth-system-science-data.net/). In contrast to these high-level requirements, many research institutes still lack well-established and structured data management. Extremely data-intensive disciplines like high-energy physics or climate research have built powerful grid infrastructures, which they provide to their respective communities. But for most "small sciences", such complex and highly specialized compute and storage infrastructures are missing and may not even be adequate. Consequently, the burden of setting up a data management infrastructure and of establishing and enforcing data curation policies lies with each institute or university. The ANDS project has shown that this approach is even preferable to a central (e.g., national or discipline-specific) data repository (The ANDS Technical Working Group, 2007). However, delegating the task of proper data curation to the head of a department or working group adds a huge workload to their daily work. At the same time, they typically have little training and experience in data acquisition and cataloging. The library has expertise in cataloging and describing textual publications with metadata, but typically lacks the discipline-specific knowledge needed to assess the semantic meaning and importance of the data objects. Trying to link raw data with calibration and configuration data at the end of a project is challenging or impossible, even for dedicated "data curators" and the researchers themselves. Consequently, researchers focus on their (mostly textual) publications and have no established procedures for coping with data objects after the end of a project or publication (Helly, Staudigel, & Koppers, 2003). This dilemma can be resolved by acquiring and storing the data automatically at the earliest opportunity, i.e. during the course of the experiment.
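The automatic, acquisition-time capture just described can be sketched as writing each dataset together with a metadata sidecar while the instrument and its configuration are still known. The file layout, field names and checksum choice below are purely illustrative assumptions, not part of any of the systems mentioned in this abstract.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_measurement(raw: bytes, instrument: str,
                        settings: dict, store: Path) -> Path:
    """Store raw experiment data plus a metadata sidecar at acquisition
    time, while the calibration/configuration context is still known."""
    store.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(raw).hexdigest()
    data_file = store / f"{digest[:12]}.dat"
    data_file.write_bytes(raw)
    sidecar = {
        "instrument": instrument,
        "settings": settings,  # calibration/configuration context
        "acquired": datetime.now(timezone.utc).isoformat(),
        "sha256": digest,      # fixity information for later curation
    }
    data_file.with_suffix(".json").write_text(json.dumps(sidecar, indent=2))
    return data_file
```

Hooking such a routine into the instrument's own acquisition software is what makes the capture non-invasive: the researcher's workflow is unchanged, while the repository ingest can happen later from the sidecar files.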
Only at this point is all the contextual information available that can be used to generate additional metadata. Deploying a data infrastructure to store and maintain the data in a generic way helps to enforce organization-wide data curation policies. Here, repository systems like Fedora (http://www.fedora-commons.org/) (Lagoze, Payette, Shin, & Wilper, 2005) or eSciDoc (https://www.escidoc.org/) (Dreyer, Bulatovic, Tschida, & Razum, 2007) come into play. However, organization-wide data management on its own has only limited added value for the researcher in the lab. Thus, data acquisition should take place in a non-invasive manner, so that it does not interfere with the established work processes of researchers and thus presents a minimal barrier to the scientist.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/127Researcher Name Resolver: A framework for researcher identification in Japan2019-06-05T12:50:28+00:00Kei Kurakawaojs.ub@uni-bielefeld.deHideaki Takedaojs.ub@uni-bielefeld.deMasao Takakuojs.ub@uni-bielefeld.deAkiko Aizawaojs.ub@uni-bielefeld.deInstitutional repositories with the aim of open access are gradually spreading in academia, and more and more research articles and academic books are being archived on the web. In particular, researchers are accessing more and more electronic articles, papers, and books on the web. This paper describes an information service that firstly provides researcher name authority on the web, and secondly gathers the web locations of academic information resources and organizes them for individual researchers (especially researchers working in Japan).2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/131Taking the plunge.
Repositories and Research pooling in Scotland2019-06-05T12:50:24+00:00James Toonojs.ub@uni-bielefeld.deDefining the Problem: The ERIS Project is working with stakeholders to understand what will motivate them to deposit their research outputs and integrate repository use as a standard part of their research life cycle. A specific focus of the project is on the needs of research pools. Research pooling is defined as the formation of strategic collaborations between universities in disciplinary or multi-disciplinary areas involving departments or individual researchers of international quality across Scotland (http://www.sfc.ac.uk/research/researchpools/researchpools.aspx). The emergence of research pooling since the 2001 RAE exercise (http://www.sfc.ac.uk/web/FILES/Our_Priorities_Research/research_pooling_article_july08.pdf) has made a significant contribution to the development and success of Scottish research (http://www.timeshighereducation.co.uk/story.asp?storyCode=404806&sectioncode=26), and as institutions digest the outcome of the 2008 RAE and plan for the Research Excellence Framework (REF), the pools are considering how they can best manage their strategic approaches and meet the growing return-on-investment and other reporting demands of their investing partners.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/132Terminology Services in a Digital Repository2019-06-05T12:50:23+00:00Michael Durbinojs.ub@uni-bielefeld.deThe uses of controlled vocabularies in digital library applications can be expanded with ease when thesauri are made available using a standard service-oriented architecture.
Adopting this approach, the Indiana University Digital Library Program has been able to easily adapt existing tools to use controlled vocabularies and to take better advantage of a wide array of controlled vocabulary sources.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/134The UCAR Open Access Mandate: A Community-Centered Model of Action2019-06-05T12:50:21+00:00Mary Marlinoojs.ub@uni-bielefeld.deJamaica Jonesojs.ub@uni-bielefeld.deKaron Kellyojs.ub@uni-bielefeld.deMike Wrightojs.ub@uni-bielefeld.deIn its role of managing the US federally-funded National Center for Atmospheric Research (NCAR), the non-profit University Corporation for Atmospheric Research (UCAR) has a strong history of supporting and promoting the atmospheric sciences and related fields. In September 2009, UCAR joined a growing number of other institutions worldwide in passing an Open Access mandate requiring that all peer-reviewed research published in scientific journals by its scientists and staff be made publicly available online through its institutional repository, OpenSky. The new policy and accompanying repository will enable UCAR to compile, preserve and share a complete record of its intellectual output; increase its community visibility and impact; and advance research in the atmospheric sciences by providing free, worldwide access to UCAR and NCAR scholarship. The passage of the UCAR Open Access policy was especially noteworthy as it marked the first time a National Science Foundation-funded national laboratory has mandated Open Access. Also noteworthy was the broad community-driven process that the NCAR Library, as the leader in this initiative, employed.
This presentation will outline the three-phase process adopted by the Library in its effort to reflect both institutional and disciplinary community values and needs through OpenSky services and policies.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/135Tools for Dataset Lifecycle Management2019-06-05T12:50:20+00:00Alex D. Wadeojs.ub@uni-bielefeld.deDean Guoojs.ub@uni-bielefeld.deSimon Mercerojs.ub@uni-bielefeld.deOscar Naimojs.ub@uni-bielefeld.deMichael Zyskowskiojs.ub@uni-bielefeld.deWith a growing demand for transparency and openness around scientific research and an emphasis on the sharing of scientific workflows and datasets, there is a similarly increasing variety of client and web-based tools required to manage each stage in the lifecycle of individual datasets. Datasets are produced from a variety of instruments and computations; are analyzed and manipulated; are stored and referenced within the context of a research project; and, ideally, are archived, stored, and shared with the rest of the world. Each of these efforts, however, requires a number of user actions involving a growing number of systems and interfaces. In an effort to preserve the flexibility and autonomy of the researchers, but also to minimize the logistical effort involved, we present in this paper a partial solution to this problem through the integration of workflow execution, project collaboration, project-based dataset management and versioning, and long-term archiving and dissemination.
This example demonstrates the orchestration of a number of existing Microsoft Research projects; however, the interactions between them use existing web interoperability protocols and can easily support the replacement of individual architectural components with related services.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/142Worlds Collide: A Repository Based on Technical and Archival Collaboration2019-06-05T12:50:12+00:00Erin OMearaojs.ub@uni-bielefeld.deGregory Jansenojs.ub@uni-bielefeld.deThe failure of many institutional repositories (IR) to acquire large sets of faculty publications has shown that the traditional IR model is not sustainable without a shift in academic publishing. The Carolina Digital Repository (CDR) aims to be more than a traditional IR and instead of focusing primarily on open access publishing, it will acquire, preserve and make accessible a range of at-risk scholarly output, such as datasets, faculty papers, university records and other faculty research projects.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/51A service-oriented national e-theses information system and repository2019-06-05T12:55:04+00:00Nikos Houssosojs.ub@uni-bielefeld.dePanagiotis Stathopoulosojs.ub@uni-bielefeld.deIoanna Sarantopoulouojs.ub@uni-bielefeld.deDimitris Zavaliadisojs.ub@uni-bielefeld.deEvi Sachiniojs.ub@uni-bielefeld.deIntroduction In this article we present an overview of the information technology infrastructure that supports the operation of the Greek National Archive of Doctoral Theses (HEDI).
The infrastructure, which has recently been re-developed to replace a legacy system, makes use of repository software, in particular the DSpace platform, as part of a service-oriented information system based on open source components.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/55Authority framework in 1.6 and CILEA's customization for the Hong Kong University2019-06-05T12:55:00+00:00Andrea Bolliniojs.ub@uni-bielefeld.deAllen Lamojs.ub@uni-bielefeld.deSusanna Mornatiojs.ub@uni-bielefeld.deDavid T. Palmerojs.ub@uni-bielefeld.deUniversities and research centers are rethinking their communication strategies, highlighting the quality of their research output and the profiles of their best researchers. Listing publications from an Expert Finder system may represent a solution. But providing an Expert Finder system within an IR is a more innovative approach. This idea was developed by the University of Hong Kong Libraries and applied to their IR, HKU Scholars Hub at http://hub.hku.hk/, powered by DSpace. This presentation shows how the HKU requirements were implemented by CILEA in the context of the ResearcherPage@HKU project. Using the new authority control framework by Larry Stone, introduced in DSpace 1.6.0, an Expert Finder system can be nicely integrated with DSpace but kept technically separate. Its components can evolve separately and are easier and cheaper to maintain.
The author (Bollini) contributed to porting the authority control framework, originally implemented for the XMLUI, to the JSPUI, and to extending its architecture to support browse and search variants.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/56Batch Metadata Editing - Dspace 1.6: a workshop/tutorial to inform and build skills2019-06-05T12:54:59+00:00Leonie Hayesojs.ub@uni-bielefeld.deStuart Lewisojs.ub@uni-bielefeld.deVanessa Newton-Wadeojs.ub@uni-bielefeld.deA new feature of the DSpace 1.6 software is "Batch Metadata Editing". It gives repository staff the ability to export metadata and change it easily for re-upload into the system. Once you try this, "Data Entry" will never be the same.2010-12-31T00:00:00+00:00Copyright (c) https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/58Blacklight: Leveraging Next Generation Discovery2019-06-05T12:54:57+00:00Tom Cramerojs.ub@uni-bielefeld.deBlacklight is an open source, next generation discovery application. Originally developed to serve as an overarching "discovery layer" for libraries, its design and engineering give it the necessary feature set and flexibility to also serve as a repository interface, capable of fronting content of any kind, local or remote. With a rich set of search, browse and view functions, Blacklight's look, features and behaviors can be readily configured to meet local needs "out of the box". As an application with a modular architecture, it provides a framework capable of supporting additional libraries and widgets that extend Blacklight's capabilities beyond resource discovery. And as a vibrant open source project integrating enhancements and development from more than a dozen institutions, Blacklight is becoming a proven platform for content discovery and access, agnostic of underlying systems or repositories.
This presentation will demonstrate the broad-based utility of Blacklight, including its key features, its use in different contexts, and how it integrates with different repositories to provide a rich, ready-made discovery application.

Building flexible workflows with Fedora, the University of York approach
Julie Allinson, Yankui Feng
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/62

In 2008, the University of York embarked on a project to build a multimedia Digital Library underpinned by Fedora Commons. In the long term, the York Digital Library (YODL) plans to meet not only multimedia requirements, but multi-disciplinary, institutional, multi-user and multiple access control needs. In order to do this, we needed a flexible, scalable approach to fulfil the following three strands of our roadmap:
* An 'administrative' workflow, including metadata creation forms, automatic extraction of metadata and data/resource transformation for images, video, music, audio and text resources, extensible as new resource types are identified.
* A self-deposit workflow for non-administrative users to deposit to YODL, White Rose Research Online (WRRO) and other targets as appropriate.
* Bulk ingest tools and procedures, including a desktop deposit tool.
This paper will outline current and future work at York which builds on Fedora Commons, initially drawing on the Muradora interface and access control layer, with a SWORD-enabled simple deposit tool in development and future plans to make this more flexible with Mura-independent applications.

Content Model Driven Software
Kåre Fiedler Christiansen, Asger Askov Blekinge
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/64

Digital collections often have very different properties.
Fedora Commons is flexible enough to contain collections with varying structures, file formats and metadata formats. However, that flexibility makes it difficult to work with the data, since very little is known about the data's properties. We present a way to use machine-readable, detailed content models, called Enhanced Content Models, that allow software to adapt automatically to specific collections.

Custom Rich Client, Multimedia Interfaces for the Web and Mobile for Fedora and DuraCloud Using Adobe Open Source Solutions
Greg Hamer
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/66

Adobe supports several open source projects for creating custom rich client, multimedia interfaces for both the web and, now, mobile devices. This session will focus on using Fedora's and DuraCloud's web service and REST APIs in conjunction with the following open source frameworks and servers supported by Adobe:
-- Flex SDK, http://opensource.adobe.com/wiki/display/flexsdk
-- Open Source Media Framework (OSMF), http://opensource.adobe.com/wiki/display/osmf
-- BlazeDS, http://opensource.adobe.com/wiki/display/blazeds

Developing publishing process support system with Fedora and Orbeon Forms - A case study
Matti Varanka, Ville Varjonen, Tapio Ryhänen
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/67

At the University of Oulu, nearly finished dissertation theses are edited to follow local ACTA templates and conventions. This process can take many iterations between the dissertants, the series editors and the printer. A great deal of data about the publication and its creators must also be gathered in order to create cover pages, abstracts, and other informational pieces related to the publication.
Since this kind of process is hard to manage via e-mail, supporting software for the publication process is necessary. This article (and presentation) describes a work-in-progress case study of the development of such a publishing process support system.

DSpace 1.6 usage statistics: What can it do for you?
Ben Bosman
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/70

Introduction
DSpace 1.6 has been extended with a new Apache Solr based statistics solution. This contribution to DSpace is the open-source version of @mire's commercial "Content and Usage Analysis" DSpace module. The DSpace 1.6 statistics offer storage of usage data including bitstream downloads, item display page visits, collection and community homepage visits, and more.

DSpace Discovery: Unifying DSpace Search and Browse with Solr
Mark Diggory
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/71

One key innovation long awaited by the DSpace community is a more intuitive and unified search and browse experience. NESCent and @mire NV have collaborated to create a new faceted search and browse experience for NESCent's DSpace repository, Dryad. DSpace Discovery is a modular add-on for the DSpace XMLUI that replaces DSpace search and browse with Solr. The implementation of Discovery's services utilizes the DSpace Services API, originally developed for DSpace 2.0 and back-ported for use within the recent release of DSpace 1.6.0.
Thus, DSpace Discovery represents the next stage in @mire's DSpace 2.0 development initiative.

DSpace Helps Irish National Learning Repository To Change Its Focus
Catherine Bruen, Gavin Henric, Bob Strunz
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/72

This paper describes how the Irish National Digital Learning Resource Repository (NDLR) has implemented a DSpace-based platform which enables it to use its limited resources more effectively to serve customer needs. The implementation of a DSpace-based solution in partnership with two private-sector service providers has permitted a refocusing of the available resources from software licensing to research and development of the platform.

DSpace Under the Hood: How DSpace works
Stuart Lewis, Leonie Hayes, Elin Stangeland, Kim Shepherd, Richard Jones, Monica Roos
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/73

Whilst you don't need to be a mechanic to drive a car, it is helpful to have a basic understanding of how a car works, what the different parts do, and how to top up your oil and pump up your tyres. This presentation will give an overview of the DSpace architecture, and will give you enough knowledge to understand how DSpace works.
By knowing this, you will also learn about ways in which DSpace can be used, and ways in which it cannot.

DSpace Under the Hood: The development process and YOUR role in it
Stuart Lewis, Leonie Hayes, Elin Stangeland, Kim Shepherd, Richard Jones, Monica Roos
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/74

DSpace development is undertaken by the DSpace community. No single person or organisation is in charge, and without contributions from the DSpace community the platform would not continue to develop and evolve. Sometimes it can appear that there are people in charge, or that, unless you are a technical developer, there is no way or need to contribute. This presentation will explain how DSpace development usually takes place and who has input at each stage, and will equip you to contribute further, or help you contribute for the first time.

DuraSpace strategic overview
Sandy Payette, Bradley McLean, Thornton Staples, Tim Donohue, Michele Kimpton
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/76

Enhanced Content Models
Asger Askov Blekinge, Kåre Fiedler Christiansen
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/78

With the release of Fedora Commons 3.0, the Content Model Architecture (CMA) was added to Fedora. It was not meant as an end-all solution, but rather as a starting point for building more advanced content models. The Fedora team expected the user community to figure out the best ways to use and amend this design.
Now that the CMA has been around for a while, certain improvements have, by agreement of the Fedora Committer Team, matured enough to be brought back into the core Fedora system, probably with the coming Fedora 3.4 release. This proposal aims to present these improvements to the Fedora community.

Harvest: A Digital Object Search and Discovery System for Distributed Collections with Different File Types and Structures
Frances Webb, Joy Paulson
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/84

The Harvest site, http://harvest.mannlib.cornell.edu, is implemented using Fedora for data management, Solr/Lucene for search, and Drupal for the user interface. Its goal is to provide an integrated search interface in which differences in format, structure and location are disguised in favor of treating objects that are conceptually alike as like, parallel objects. This is done by building Fedora content models that keep track of the complexity while providing services normalized to the objects' conceptual types; Lucene search documents that are fully normalized to hide implementation differences; and a Drupal front end that can treat all of the objects as generic objects until and unless specialized front-end services are built.

Improving DSpace Backups, Restores, Migrations
Tim Donohue
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/87

In the past, backing up your DSpace contents has involved semi-synchronized backups of both your database and your files. Although this procedure generally works fine, it can prove problematic when you suddenly need to restore the contents of a single Community, Collection or Item (both metadata and files).
There is also the problem of metadata and content files residing in separate backups: if either backup becomes corrupted, it is nearly impossible to completely restore your DSpace contents. This talk will introduce a new DSpace feature being developed as part of the DuraCloud integration project. This new feature will allow you to export your entire DSpace Community/Collection hierarchy (including all Items, with their metadata and files) into a series of METS-based packages. These METS-based packages can be used to restore all of your DSpace contents (into another DSpace), or just the contents of a single Community, Collection or Item. They can also provide a more stable way to back up your DSpace contents, or an additional means of getting content in or out of DSpace. This work is based on a DSpace plugin built as part of the MIT PLEDGE project; however, it has been updated to allow for a complete restore of your DSpace hierarchy.

Integrating DSpace in a Multinational Context: Challenges and Reflections
Kyle Strand
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/90

Background
Sharing and accessing knowledge is possible in ways that were not available even in the recent past. These methods make the exchange of ideas, experiences, and knowledge efficient, relatively inexpensive, and simple. A number of international standards have been developed and agreed upon by academic institutions, research enterprises, governments, and others, which increase the ease of knowledge exchange through interoperability of systems and consistency in the presentation of expected features and data. With an internet connection and a few clicks of a mouse, a vast array of knowledge is instantly accessible.
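As a rough illustration of the METS-based packages described in the "Improving DSpace Backups, Restores, Migrations" abstract: a METS document wraps an object's metadata and file references in one manifest. The sketch below is heavily simplified and is not DSpace's actual AIP profile; the handle and file names are invented.

```python
# Minimal, illustrative METS manifest for one repository item.
# Element layout is simplified; a real DSpace AIP carries much more structure.
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"
XLINK_NS = "http://www.w3.org/1999/xlink"

def minimal_mets(obj_id, file_names):
    """Build a bare-bones METS document listing an object's content files."""
    mets = ET.Element("{%s}mets" % METS_NS, {"OBJID": obj_id})
    file_sec = ET.SubElement(mets, "{%s}fileSec" % METS_NS)
    group = ET.SubElement(file_sec, "{%s}fileGrp" % METS_NS, {"USE": "CONTENT"})
    for i, name in enumerate(file_names):
        f = ET.SubElement(group, "{%s}file" % METS_NS, {"ID": "file_%d" % i})
        # Point at the packaged file via an xlink:href reference
        ET.SubElement(f, "{%s}FLocat" % METS_NS,
                      {"LOCTYPE": "URL", "{%s}href" % XLINK_NS: name})
    return mets

doc = minimal_mets("hdl:123456789/1", ["thesis.pdf", "metadata.xml"])
```

Restoring then amounts to reading such manifests back and re-creating each Community, Collection or Item they describe.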
The Inter-American Development Bank (IDB), established to accelerate economic and social development in Latin America and the Caribbean, is committed to ensuring that its knowledge products are accessible and visible for Bank employees, constituents of borrowing and non-borrowing member countries, partners and other practitioners in the region, and the public at large. Given the nature of the IDB, access to and sharing of knowledge are critical to the success of its development mission. It is of the utmost importance to the institution, its mission, and its role in the region that the knowledge produced by the Bank be easily accessible and visible for all in the development community and beyond.

Integrating research output in UPC repositories
Antonio Juan Prieto-Jimenez, Yolanda Cacho-Figueras, Ruth Iñigo-Robles, Anna Rovira-Fernandez, Jordi Serrano-Muñoz
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/92

Introduction
In 2007, the Universitat Politecnica de Catalunya (UPC) started a strategic university-wide project called DRAC (Descriptor de la Recerca i l'Activitat Academica, or Academic Activity and Research Descriptor), whose main goal was the development of a new information system for managing, evaluating and publishing research output. The Library participated in the project from the beginning; the other two partners were OTRDI (the office managing the research output) and UPCnet as the technological partner. DRAC, the new software, has several applications: the main one is that academics can compile a curriculum vitae following the national standard for presentation to national and regional administrations.
It is also the tool that allows research groups to publish their output on the Internet.

Integrating true multilingual capabilities into an Institutional Repository: Building the World Health Organization's Institutional Repository for Information Sharing
Michael Guthrie, Cristiane de Oliveira, Philippe Veltsos, Yousef Elbes, Ian Roberts, Dorothy Leonor, Graham Triggs, Hayden Young
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/93

Introduction
In a global context, how do we facilitate dissemination and access if the material in a repository is searchable and retrievable in only one or two languages? Much research and many public health guidelines remain unknown to large numbers of researchers, health workers and the general public when they can only be accessed in one language or another. How do we promote the integration of various information sources in an international organization with 147 country offices, six regional offices and one headquarters, and with material being published in 6 official languages and 53 non-official languages? Research ethics should start considering, at the design stage, the reach of the methods used and the results obtained beyond the boundaries of the research language.
Access to information in as many languages as possible should become a major component of any accessibility-related debate.

Interoperability Issues Between Learning Object Repositories and Metadata Harvesters
Ricard de la Vega, Jordi Conesa, Julia Minguillon
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/96

In this paper we describe an open learning object repository on Statistics, based on DSpace, which contains true learning objects: exercises, equations, data sets, and so on. This repository is part of a large project intended to promote the use of learning object repositories as part of the learning process in virtual learning environments. This involves the creation of a new user interface that provides users with additional services such as resource rating and commenting. Both aspects render traditional metadata schemes such as Dublin Core inadequate, as there are resources with no title or author, for instance, because those fields are not used by learners to browse and search for learning resources in the repository. Therefore, exporting OAI-PMH compliant records using OAI-DC is not possible, limiting the visibility of the repository's learning objects outside the institution.
We propose an architecture based on ontologies and the use of extended metadata records for both storing and refactoring such descriptions.

JorumOpen - customising DSpace for a national repository of Open Educational Resources
Ryan Hargreaves, Gareth Waller, Christine Rees, Laura Shaw
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/98

Jorum, a JISC-funded service in development since 2002, is committed to collecting and sharing learning and teaching materials within the UK Further and Higher Education community. With the growing interest in and increase of "open" content, Jorum released a new option in January 2010: JorumOpen, which provides a focus for finding nationally hosted learning materials developed by the UK Further and Higher Education sector. JorumOpen allows any user, from any country, free and unrestricted access to learning materials licensed under a Creative Commons licence. The central component of JorumOpen is an open source digital repository based on a modified version of DSpace 1.5.2.

Maintaining Standards - managing metadata consistency across collections and tools
Michael Durbin
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/100

The flexibility to store any type of metadata in a Fedora repository is one of its greatest strengths, but it externalizes the burden of metadata standardization and validation. This burden is exacerbated by the fact that, during the course of a digital object's life, its metadata may be modified by several different applications or utilities operated by any number of users.
This talk will cover strategies for maintaining consistency and for creating and adhering to institution-specific policies or standards for metadata quality.

Move On Up: TCD TARA and the Value of DSpace 1.6
Niamh Brennan, Gavin Henrick
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/102

Before
In 2006, Enovation Solutions and Trinity College Dublin developed TARA, Trinity's Access to Research Archive. TARA was based on DSpace, customised with specific enhancements for TCD and integrated with the university's CERIF-compliant CRIS, the TCD Research Support System (RSS). The version of DSpace deployed was 1.3.2. Over the years, much further integration, customisation and configuration of complex workflows, metadata fields and web services were implemented as TARA became an integral part of the university's fully integrated research environment. However, it was realised that the version of DSpace was starting to creak, and that an upgrade was needed to move the repository to a new level of capacity, functionality and responsiveness to the needs of the research community. In 2009, TCD approached Enovation Solutions to investigate and plan the upgrade of TARA from DSpace v1.3.2.

Primo discovery and delivery of Fedora content
Maude Frances, Stefania Riccardi, Carmel Carlsen, Angela Dawson
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/113

The paper describes and demonstrates the use of Primo as the discovery layer for a Fedora repository. Primo is an Ex Libris product designed to be a one-stop solution for discovery and delivery of resources from various sources.
Fedora/Primo systems have been deployed on two UNSW eResearch projects, based on the requirements of research groups in public health and the social sciences. Planning has commenced for the implementation of Primo on existing Fedora/VITAL systems, including MemRE (Membranes Research Environment). With the general release of Primo 3 in April 2010, VITAL will also be replaced as the search and discovery layer of the institutional repository. The presentation demonstrates KnowlHEG, an electronic gateway for Human Resources for Health (HRH) material relating to Asia and the Pacific region, which was jointly developed by the University Library and the School of Public Health and Community Medicine (SPHCM) at UNSW. Primo provides the user interface, search functionality and persistent URLs on top of a Fedora repository.

Public Policy Research in PolicyArchive: Use of DSpace When Metadata Meets Open Access
Sarah Buchanan
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/115

Legislation at the state and national level is shaped through research on the existing conditions, infrastructure, and community practice of specific areas of social inquiry. Public policy research, both within the academy and in government, serves a vital role in preparing legislators and policymakers to enact well-informed measures that are grounded in regional studies. Such work is becoming increasingly urgent as social issues and phenomena become more layered, complex, and interdisciplinary, and require more nuanced investigation. The results of this research, while essential at the time of need, have not historically been collected in a systematic manner that would enable access to the data beyond the immediate time period.
A clear need has been expressed by researchers in many contexts who seek access to the rich content produced by local organizations, publishers, specialized institutes, and independent bodies of experts.

Re-thinking Fedora's storage layer: A new high-level interface to remove old assumptions and allow novel use cases
Aaron Birkland, Asger Askov Blekinge
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/117

Traditionally, the pluggable storage interface in Fedora has followed a "low-level" paradigm in which objects and datastreams are presented to the storage layer as independent, anonymous blobs of data. This arrangement has proven simple, reliable, and generally flexible. In the past few years, however, there has been an increasing need for Fedora to mediate storage in more complex scenarios. Managing large numbers of big datastreams, multiplexing storage between different devices or cloud storage, and archiving content in a transparent manner are tasks that are currently difficult to achieve through Fedora.

Solrizer: Pragmatically Connecting Search, Management and Indexing in a Repository Solution
Matt Zumwalt
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/128

Any repository solution provides facilities for the creation, management, and editing of content, as well as facilities for searching and browsing through that content. Experience has shown that when a solution binds these two areas of functionality together too tightly, the system becomes brittle and unworkable, discouraging innovation. Our work on the Hydra project has produced a flexible and intuitive solution that combines these two areas in an almost entirely decoupled fashion.
This solution, which is already working in multiple Hydra applications, is built on a three-part pattern in which Blacklight handles search and discovery, ActiveFedora handles the creation, management and editing of content, and a small application called Shelver supplies the crossover point by indexing the content into Solr so that it shows up in Blacklight. This three-part approach reflects a strong pattern for designing and/or improving repository solutions. The main pivot of this approach is to treat indexing as its own separate part of the application and to allow the indexing process to evolve constantly as part of the application development cycle. This work is the product of combining established best practices, best-of-breed software, and lessons learned from an iterative approach to application development. While our implementation is focused on Fedora repositories, the software could be used in multiple contexts, and the pattern is certainly applicable to any content-oriented application.

Symplectic Repository Tools: Deposit Lifecycle with Interchangeable Repositories
Richard Jones
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/130

This paper presents the development of software at Symplectic Ltd to provide a full CRUD (Create, Retrieve, Update, Delete) interface for a number of digital repositories using common, open standards: Symplectic Repository Tools [http://www.symplectic.co.uk/products/repositorytools.html].
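The core of the Solrizer pattern described above, decoupled indexing, can be sketched very simply: a component reads an object's metadata and flattens it into a Solr document. The dynamic-field suffixes used here (`_t` for searchable text, `_facet` for facet values) are assumptions for the example, not necessarily the project's actual schema.

```python
# Illustrative "solrize" step: flatten object metadata into a flat Solr
# document dict, ready to be posted to a Solr index.
def to_solr_doc(pid, metadata, facet_fields=("creator", "format")):
    """Map a metadata dict into Solr dynamic fields keyed by suffix."""
    doc = {"id": pid}
    for name, values in metadata.items():
        if isinstance(values, str):
            values = [values]
        doc[name + "_t"] = values              # tokenized, searchable copy
        if name in facet_fields:
            doc[name + "_facet"] = values      # exact-match copy for facets
    return doc

doc = to_solr_doc("changeme:12", {"title": "On repositories",
                                  "creator": "Zumwalt, Matt"})
```

Because indexing lives in its own component, the mapping can change freely without touching either the management layer or the discovery interface.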
The primary objective of this work has been to integrate a research management system (Symplectic Elements [http://www.symplectic.co.uk/products/publications.html]) with both DSpace [http://www.dspace.org/] and Fedora [http://www.fedoracommons.org] (specifically, for the University of Oxford), such that the academic's experience of managing their repository content is simple and straightforward, and in no way diverts from the overall process of managing their research outputs. The consequences of this include increased uptake of the repository and higher throughput of full-text content.

Towards Interoperable Preservation Repositories (TIPR)
Priscilla Caplan, William Kehoe, Joseph Pawletko
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/136

The TIPR Project, Towards Interoperable Preservation Repositories, was begun in October 2008; its participants are the Florida Center for Library Automation, the Bobst Library at New York University, and Cornell University Library. Our goal has been to develop, test, and promote a standard format for exchanging information packages among dissimilar preservation repositories: an intermediary information package that all repositories can read and write, overcoming the mismatch between repository types.

Using DSpace as a Closed Research Repository
Ianthe Hind, Robin Taylor
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/137

Introduction
This year the University of Edinburgh introduced a dual DSpace (http://www.dspace.org/) repository architecture: a closed repository to hold all research outputs (no full text required, password protected) and an open access repository (full text only). This presentation will focus on the closed repository.
In our repository we mirror our university hierarchy: colleges and schools are the communities, and the academic staff sit at the collection level. This is not the conventional way of setting up a DSpace repository, and it immediately allows an author to be associated with a collection. We used the item-based submission from the 2009 Google Summer of Code Submission Enhancements (http://www.fedora-commons.org/confluence/display/DSPACE/Google+Summer+of+Code+2009+Submission+Enhancements). This has allowed us to display only the metadata relevant to the publication type during submission. We have added functionality for an academic (at the collection level) to export a list of their publications to their own personal webpage by inserting some JavaScript (or as XML). This dynamically fetches the list of their publications from the repository on each page load, reflecting the most recent publications and the ordering defined by the academic. The same can be done by a research administrator (at the community level) to export a list of publications for their school or research group. We have also developed the ability to export items across to our open repository using SWORD: open access items can be copied across on deposit, and those under embargo can be copied across once the embargo period has passed.

Using DSpace for Publishing Electronic Theses and Dissertations
James Halliday, Randall Floyd
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/138

The IUScholarWorks Repository is a DSpace-based institutional repository for the dissemination and preservation of Indiana University's scholarly output. Some time ago, our team decided to incorporate electronic theses and dissertations (ETDs) into our DSpace repository, and this created several technical challenges for us.
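The SWORD-based copy between repositories described in the Edinburgh abstract above boils down to an HTTP POST of a package with a few profile headers. A minimal sketch, assuming the SWORD 1.3 header names and a METS/DSpace SIP packaging URI; the deposit URL is a placeholder, and nothing is actually sent:

```python
# Assemble (but do not send) the pieces of a SWORD 1.3 package deposit.
def sword_deposit_request(deposit_url, package_name,
                          packaging="http://purl.org/net/sword-types/METSDSpaceSIP",
                          on_behalf_of=None):
    """Return the URL, method and headers for a zipped package deposit."""
    headers = {
        "Content-Type": "application/zip",
        "Content-Disposition": "filename=%s" % package_name,
        "X-Packaging": packaging,   # tells the server how to unpack the zip
        "X-No-Op": "false",         # "true" would make this a dry run
    }
    if on_behalf_of:
        headers["X-On-Behalf-Of"] = on_behalf_of  # mediated deposit
    return {"url": deposit_url, "method": "POST", "headers": headers}

req = sword_deposit_request("https://open.example.ac.uk/sword/deposit/123",
                            "item.zip")
```

An embargo-aware exporter would simply delay calling this until the embargo date has passed.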
Getting ETDs into DSpace is a challenge that a number of institutions have tackled recently, and several innovative solutions have been found, such as Vireo, the ETD submission management tool from the Texas Digital Library. However, we were faced with a number of requirements in our ETD workflow that had not yet been encountered by other institutions and that required some interesting solutions. In this proposal, we outline a presentation that will discuss these challenges and the solutions that were envisioned. We will also provide an update on our current progress towards implementing our plans, and discuss the work that remains to be done.

Building a DDC Annotated Corpus from OAI Metadata
Mathias Lösch, Ulli Waltinger, Wolfram Horstmann, Alexander Mehler
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/61

A frequently overlooked benefit of open access publications is that they are an easily accessible and cost-effective data source for research disciplines such as text mining, natural language processing and computational linguistics. In those fields, linguistic data is usually managed in the form of corpora, i.e. machine-readable bodies of text that represent a particular variety of language.

ESCAPE: A generic tool for enhanced scientific communication
Maarten van Bentum, Dennis Vierkant, Jan M. Gutteling
https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/81

General scope
In order to enhance communication of research results as part of a network of actors in a particular field, one wants to:
* relate relevant objects (documents, persons, institutions, projects, ...
on the basis of content and describe/annotate these relations
· communicate and present these aggregated objects for various target groups: not only scientists but also policy makers, journalists, companies, and the general public
· enhance this communication by commenting and tagging related objects

ESCAPE is a tool in which users can aggregate digital objects stored at any location and describe, annotate, comment on and tag the relations between these objects. The system allows not only formal relations (like bibliographic metadata) but especially "content relations" concerning topics, reviews, comments, discussions, applications, etc.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/86
Implementing citation management and report generation value-added services over OAI-PMH compliant repositories
Nikos Houssos, Christina Paschou, Ioanna-Ourania Stathopoulou, Konstantinos Stamatis, Despina Hardouveli

The National Documentation Center (EKT) has developed HELIOS (http://helios-eie.ekt.gr), the institutional repository of the National Hellenic Research Foundation (NHRF), aiming at collecting the scientific work of its associate researchers. DSpace has been used as the repository platform in the implementation of HELIOS. According to the repository literature (A DRIVER's Guide to European Repositories, Amsterdam University Press, 2008), offering value-added services to researchers can be an important factor for repository take-up, able to significantly increase deposits through self-archiving. Therefore, in order to encourage the usage of HELIOS among the NHRF researchers, an application providing value-added services over the repository has been developed.
In brief, this application harvests the digital repository's data and presents them outside the repository's framework, enabling citation management and configurable custom reporting, for example producing publication lists per researcher and institute, exactly in the format used in the institute's annual report. The application is in operation on top of the HELIOS DSpace-based repository; however, it has been designed and implemented to depend only on information retrieved via OAI-PMH, so that it can work with any OAI-PMH compliant repository platform (DSpace, EPrints, Fedora, etc.).

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/109
Pedocs and kopal: Co-operation of subject repository and long-term archiving
Julia Kreusch

The Information Center for Education of the German Institute for International Educational Research (DIPF) in Frankfurt am Main offers information and advice services for all areas of education and educational science. These include online portals, full-text and bibliographic databases, information systems and participative Web 2.0 applications. Since 2008, the Information Center for Education has operated its subject repository "pedocs", which focuses on digital publications in the field of educational science, pedagogical practice and the history of education. For the time being this repository is run as a project in parallel with a second project, "Long term preservation - pedagogics", which aims to preserve texts acquired, recorded and stored in the pedocs repository on a long-term basis. The synchrony of both projects offers an opportunity to prepare the digital objects in pedocs from the outset in a way that makes them suitable for subsequent transfer into a long-term archive. While the repository is solely managed by the Information Center for Education, the archiving will be operated in co-operation with the German National Library (DNB).
The poster will outline the workflows that have been developed and established for the acquisition of open-access publications on the one hand and for the long-term preservation of the text objects on the other. It will illustrate which aspects (i.e. legal, technical, organisational) had to be considered to prepare the objects for transfer into the long-term archive. The co-operation and resource sharing between DIPF and DNB, taking place in the framework of a superordinate project using the archiving system "kopal", a joint development of the DNB and the Göttingen State and University Library, will be presented.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/63
Complete Preservation with EPrints
David Tarrant, Steve Hitchcock

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/80
EPrints Funding Data and Workflow
William J. Nixon, Lesley Drysdale

This short paper provides details of the addition of new fields for funder and award data and the creation of a new Funding option in the deposit workflow.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/99
Kultivating Kultur
Carlos Silva, Stephanie Meece

Institutional Repositories (IRs) within the UK have traditionally focused upon text-based research and have had a low uptake within the creative arts. The Kultur Project, funded by the Joint Information Systems Committee (JISC) for the period 2007-2009, was a highly successful collaboration between the University of Southampton (including Winchester School of Art), University of the Arts London, University for the Creative Arts and the Visual Arts Data Service (VADS).
Using EPrints software, the project investigated how best to store, share and promote research in the creative arts in a way that could function as a multimedia showcase for digital versions of creative works as well as documenting performances, exhibitions and installations.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/101
MePrints: Building User Centred Repositories
D. Millard, H. Davis, Yvonne Howard, S. Francois, Patrick McSweeney, Debra Morris, M. Ramsden, S. White

Over the last few years we have been working to reinvent Teaching and Learning Repositories, learning from the best practices of Web 2.0. Over this time we have successfully deployed a number of innovative repositories, including Southampton University EdShare, The Language Box, The HumBox, the Open University's LORO and Worcester Learning Box.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/103
Moving Southampton ePrints to Oracle
Wendy White

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/111
Portal Workflow Integration: Cardiff University I-WIRE Project
Scott Hill

The I-WIRE project is designing and developing an enhanced deposit workflow that will be presented to Cardiff University's researchers via our Modern Working Environment portal. Presenting the workflow in this way gives us opportunities to integrate the deposit process and data with other research-related processes and systems.
We are also exploring DOI deposit and Web of Science import within the same portal, all aimed at making it easier for academics to deposit their publications.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/119
Reinventing Teaching and Learning Repositories
Yvonne Howard

Through the Faroes and OneShare projects, the EdShare team have developed an innovative approach to teaching and learning repositories, learning from the best practices of Web 2.0 and re-imagining these repositories from the ground up as living sites, whether for a community or an institution. Many existing teaching and learning repositories base themselves closely on the research repository model, but research repositories are about archiving, and ordinary practitioners rarely want to archive their teaching materials.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/120
Report on the Development of Breadcrumbs Navigation Feature in EPrints
Tomasz Neugebauer

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/121
Report on the Development of ETD-MS Export Plugin in EPrints
Tomasz Neugebauer

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/123
Repository Software as a Platform for the Registry of Open Access Repositories
T. Brody, Leslie Carr, S. Harnad

We have migrated the ROAR service to a repository software-based platform.
The goal of this project was to reduce the administrative overhead for us and improve the experience for users by enabling them to control and update their own records.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/126
Research reporting using Eprints at The University of Northampton
Miggie Pickton

Each year The University of Northampton research administrators produce an "Annual Research Report" for each of the university's six Schools. Before 2007, and in the absence of any centralised research database, research details were simply collated and word-processed into one-off documents. NECTAR provided the opportunity to store bibliographic details in a systematic manner and the potential to re-use these data for research reporting.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/129
Symplectic Integration
Richard Jones

This paper presents the development of software at Symplectic Ltd to provide a full CRUD (Create, Retrieve, Update, Delete) interface to EPrints using common, open standards: Symplectic Repository Tools.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/133
The State and Future of EPrints
Leslie Carr

Les Carr, the EPrints Technical Director, will give an overview of the developments of the past year and a preview of upcoming features.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/139
Using EPrints to Understand Research / The Semantics of Reading Lists
Patrick McSweeney

The Department of Electronics and Computer Science (ECS) at the University of Southampton has about two hundred research staff.
Often staff members have overlapping research areas without knowing it. It is useful to discover what the people around you are working on and to identify those with similar research interests so that ideas can be shared. Electronics and computer science are fields that evolve rapidly, so a researcher's interests change or drift; a researcher's list of interests from six months ago may be quite different to their research interests now. Rather than encouraging researchers to constantly update their own interests, this information could be derived automatically from their reading material.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/140
Video and Mass Storage
David Tarrant

There are a number of challenges in handling large amounts of data for multimedia content: the sheer size of raw HD video data and the many versions of a video that are created during production and dissemination require a bespoke infrastructure that is different in nature to the normal browser-based solutions.
This session looks at the vidEPrints customisation designed to integrate with an institution's video production environment and handle dissemination via multiple services, including YouTube and iTunes U.

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/91
Integrating Repositories With Research Infrastructure: The Astronomical Virtual Observatory
Francoise Genova

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/141
We can Build Amazing Public Islands - but Should We?
Sandy Payette

Copyright (c) 2010 Sandy Payette

https://biecoll2.ub.uni-bielefeld.de/index.php/or/article/view/122
Repositories and Linked Open Data: the view from myExperiment
David De Roure

While some repositories are focused on data, the myExperiment project has demonstrated the value of sharing the methods that are used to process that data, sharing know-how and building new capabilities through the community. Evolving usage of the website provides glimpses of the future behaviour of researchers and an exploration of what researchers might be sharing in the future instead of papers. This exploration of social sharing and ad hoc reuse has taken the project into the world of scholarly research objects, linked data and what might be described as "Linked Open Methods". We now see researchers beginning to share new methods that operate at this next level of research.
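Several of the services above, most explicitly the HELIOS citation-management application, depend only on metadata retrieved via OAI-PMH, which is what makes them portable across DSpace, EPrints and Fedora. A minimal sketch of that harvesting step, using only the Python standard library: the `ListRecords` verb, `metadataPrefix` and `resumptionToken` arguments are defined by the OAI-PMH 2.0 protocol, but the endpoint URL, function names and sample response here are illustrative assumptions, not code from any of these projects.

```python
# Illustrative OAI-PMH harvesting sketch (hypothetical endpoint; not taken
# from any project described above). Standard library only.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

DC_NS = "{http://purl.org/dc/elements/1.1/}"

def list_records_url(base_url, metadata_prefix="oai_dc", resumption_token=None):
    """Build a ListRecords request URL per the OAI-PMH 2.0 protocol."""
    params = {"verb": "ListRecords"}
    if resumption_token:
        # When resuming a paged result set, the token replaces all other arguments.
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urlencode(params)

def extract_titles(xml_text):
    """Pull dc:title values out of a ListRecords response document."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(DC_NS + "title")]

# A tiny, schematic response such as a repository might return:
sample_response = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>Example record</dc:title>
    </oai_dc:dc>
  </metadata></record></ListRecords></OAI-PMH>"""

print(list_records_url("http://example.org/oai"))
print(extract_titles(sample_response))
```

A real harvester would fetch each URL and keep requesting with the `resumptionToken` returned in the response until the repository stops returning one; everything downstream (citation formatting, per-researcher reports) then works from the harvested Dublin Core records alone.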