force11 / force11-sciwg

FORCE11 Software Citation Implementation Working Group

Home Page: https://www.force11.org/group/software-citation-implementation-working-group

License: BSD 3-Clause "New" or "Revised" License

force11-sciwg's People

Contributors

agbeltran, alee, arfon, ascl, daniel-mietchen, danielskatz, dbouquin, dcgenomics, hainesr, jonathansick, katrinleinweber, mfenner, moranegg, npch, sdruskat, stain

force11-sciwg's Issues

Understanding the usage of software citations

This is a catchall hack for understanding more about the usage of software citations.

Tasks might include:

  • What is being cited? Can we use CrossRef/DataCite APIs?
  • What can we find from Zenodo?
    • Can we identify which areas use software citations the most?
  • How much is software already being cited? Could we do something that allowed us to identify what software DOIs are being cited using the DataCite event APIs?
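As a starting point for the DataCite part of this, here's a rough sketch of querying the DataCite Event Data API for citation events targeting a software DOI. The endpoint and parameter names are my assumptions based on the public API at api.datacite.org; verify them against the current DataCite documentation before relying on them.

```python
# Sketch: query DataCite Event Data for events that cite a given DOI.
# Endpoint and parameter names ("doi", "relation-type-id", "page[size]")
# are assumptions -- check the current DataCite docs before relying on them.
import json
import urllib.parse
import urllib.request


def event_query_url(doi, relation="is-cited-by", rows=25):
    """Build the Event Data query URL for events targeting the given DOI."""
    params = {"doi": doi, "relation-type-id": relation, "page[size]": rows}
    return "https://api.datacite.org/events?" + urllib.parse.urlencode(params)


def fetch_events(doi):
    """Fetch and decode the event list (requires network access)."""
    with urllib.request.urlopen(event_query_url(doi)) as resp:
        return json.load(resp)["data"]
```

Counting the returned events per DOI would give a first, crude measure of how much a given piece of software is already being cited.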

When does code sharing warrant citation?

A use case for the Software Citation hackathon at FORCE2017.
Short version: sharing and/or copy-pasting code is very common, so when is the line crossed at which citation is warranted, and how do we spread that best practice?

A software developer in the USGS created a web app for his local science center. A person at another science center noticed the app and asked for the source code. The author passed on the source code to the requestor in good faith and in the interest of sharing with no explicit request for credit or citation. The author later found an application being promoted by a regional director in a widespread email distribution which was essentially an exact copy of the app he had authored. The copy was being promoted and praised as the original work of the local science center. The author did a difference check between the source code of his own app and the one now being promoted. The only difference between them was the section of code dealing with the source of the data being displayed on the map. No credit was ever given to the original author and the original author was only made aware of the existence of the other application and its promotion when a colleague pointed it out to him.

A few questions (optional to share):
The original author did not explicitly request credit or provide a suggested citation, but nonetheless does he have a grievance in this case? Does this sort of use by a colleague in the same agency fall under fair and free use? Did the requester have an acknowledgement responsibility to the original author in this case?

cite.research-software.org?

Progress report (of sorts)

I've sat on the domains research-software.org and scientific-software.org for a while now (free with hosting plan) and thought I'd do something useful with them.

I've set up a small, (what I hope to be) low-threshold portal site for software citation, aimed mainly at interested developers and researchers who are not yet knee-deep in the topic: https://cite.research-software.org/.

Basically pushes the principles paper, CFF and CodeMeta for now, and covers the most basic cases of citing software in a paper and providing citation metadata.

Let me know if you think this is useful. If not, I'm happy to take it down again, or change it. I'd like to run this as a community project mostly for pushing the case, so pull requests are welcome at https://github.com/research-software/citation.

Not sure if it makes much sense to include the more intricate use cases, as these would more likely be discussed directly in forums such as this?

(All of cite.research-software.org, research-software.org and scientific-software.org (and their German language counterparts) 301 to research-software.org/citation. Some SSL issues still as certbot doesn't cover this particular setup bug-free.)

Do we need "real" names for authors?

This came up in a FORCE2017 session: is it important that we are able to extract/check for the real names of people, rather than relying on GitHub (etc.) handles?

[Software Heritage] deposit / browse / download software source code features

Hi everyone,
Here is a short summary of the features we are working on at the moment:

  • A software deposit mechanism for connecting with specific platforms, in particular for research software. It will be made available through the HAL-Inria platform (the open archive of Inria, the French Institute for Research in Computer Science and Automation).
  • A web application to browse the Software Heritage archive
  • A vault to download the source code bundles

@zacchiro has presented the features above at FOSDEM this year:
https://fosdem.org/2018/schedule/event/outsourcing_distribution_requirements/

I will update you here when we open the services to the public.

Mapping committers to authors

How do we generate an author list from a list of people who committed to software source code? There is a technical aspect, e.g. fetching real names from GitHub usernames. But there is also the bigger question of who should be considered an author in the software citation sense.
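On the technical side, a first pass could parse `git shortlog -sne` output and then look up display names via the GitHub users API. The sketch below assumes shortlog's usual `count<TAB>Name <email>` layout and the standard `/users/{login}` endpoint; it deliberately says nothing about the harder question of who counts as an author.

```python
import re


# Parse `git shortlog -sne` output (lines like "   42\tJane Doe <jane@example.org>")
# into (commit_count, name, email) tuples. This only answers the technical
# half of the question; deciding authorship is a policy matter.
def parse_shortlog(text):
    entries = []
    for line in text.strip().splitlines():
        m = re.match(r"\s*(\d+)\s+(.*?)\s+<([^>]*)>", line)
        if m:
            entries.append((int(m.group(1)), m.group(2), m.group(3)))
    return entries


def github_user_url(login):
    """URL of the GitHub REST endpoint whose "name" field holds the display name."""
    return f"https://api.github.com/users/{login}"
```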

Separate publisher from archive/library

Hi everyone,

I'm sorry I couldn't be present at the last call we had in March.

I was reading the notes from the March meeting and saw that in the stakeholder's list there is the Publisher stakeholder with two types of publishers:

Publisher: includes both traditional publishers that publish text and / or software papers as well as archives such as Zenodo that directly publish software.

IMHO, we need to separate publishers, where authors deliberately deposit software to be published and where the software is reviewed, from archives, whose process is closer to that of libraries: the main goal is the preservation of software, without review and in many cases without the intervention of the authors.

I would be happy to hear thoughts on the subject.

Cheers,

Citation File Format

In the wake of the WSSSPE5.1 discussion and speed blogging group on a standard format for CITATION files, development has started on a human- and machine-readable Citation File Format (CFF) (http://github.com/citation-file-format/citation-file-format).

I'd like to find out/discuss

  • whether this format can fit in somewhere in the SCIWG,
  • whether CFF and other products of the SCIWG can integrate (e.g., streamlined keys),
  • and whether the hackathon could be used to a) contribute to the format, and/or b) start to implement some needed infrastructure around it (libraries to read, and convert from/to other formats, webservices to help users create and validate files).

Update JOSS to generate new v2 CodeMeta metadata

JOSS is for publishing software papers, and @arfon wrote a script to automate the production of a codemeta.json file for JOSS submissions. That script uses a pre-release version of CodeMeta. It would be great to update it to use the 2.0 CodeMeta release (https://doi.org/10.5063/schema/codemeta-2.0) as an exemplar of how to ship standardized software metadata with a repository software submission to JOSS.

There is a related issue about automating JOSS submissions via R that may be of interest as well, and possibly a source of inspiration.

For some reason I don't seem to be authorized to label this issue as a Force11 hackathon idea. Maybe @npch can help with that.

Aligning Citation File Format and CodeMeta

TL;DR

The Citation File Format is a software citation metadata input format tailored to support the credit-based use cases (1, 2, 15) described in the principles paper, and to enforce adoption of the principles. CodeMeta is a general exchange format for software metadata. Both can be used to provide citation metadata, and concerns have been voiced about possible duplication of effort. In my opinion, however, the formats do different things.

I propose that it's fine to have both for the initial provision of citation metadata (and let the user pick their favourite), and that downstream in the citation workflow, existing CFF files should simply be converted to codemeta.json to leverage its advantages as a multi-purpose exchange format.

Introduction

At the recent [SCIWG meetup in Berlin (during RDA)](https://github.com/force11/force11-sciwg/blob/master/meetings/20180322-Notes.md), we discussed the relationship between CodeMeta and the Citation File Format (CFF). I had wanted to do this for quite some time, as I felt the two were too close in at least a subset of their purposes to simply ignore their co-existence, and wanted to make an effort to align/reconcile the formats and their respective places in the software citation workflow.

During the meeting itself, I feel I failed to create enough understanding of the purpose of CFF and the difference between it and CodeMeta. Subsequently, I discussed their relationship at the SSI Collaborations Workshop 2018, both within a dedicated mini-workshop and in several personal discussions.

In this issue I'll try to summarize what's been discussed as necessary, and would like the working group to continue the discussion here to find the optimal way for the formats to be aligned with each other, in order to avoid unnecessary duplication of effort. I'll also make the case that both can co-exist without necessarily harming each other's progress and uptake.

Background

I guess that most members of the working group will be familiar with CodeMeta, but possibly know little about CFF, so what follows is a little background information and a brief comparison.

CFF is a YAML-based format for software citation metadata. It is the indirect outcome of a discussion group at WSSSPE5.1 that looked at replacing free-text CITATION files with something machine-readable.

CFF focuses on the "simpler" citation use cases (1, 2, 15) from the principles paper, and enforces application of the principles by requiring specific keys. It provides context and is "self-descriptive" by including 1) a mandatory message key that should contain usage instructions, and 2) scopes for secondary references for a software (e.g., a software paper, or a paper describing an algorithm implemented in the software). It is compatible with CodeMeta in that it has a column in the crosswalk table. It is both "more generic" (@danielskatz) and more specific than CodeMeta, in that a) it doesn't specify what it can relate to, which can be more than just "a software/version/object with a DOI", i.e., packages within a project, single source code files, even specific lines of code, single commits, etc.; and b) it provides more fine-grained keys for, e.g., commits vs. software versions.
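For illustration, a minimal CITATION.cff could be generated like this. The emitter is a toy sketch; the exact key set and the current cff-version value should be taken from the CFF specification, not from this example.

```python
# Toy emitter for a minimal CITATION.cff file. Key names follow the CFF
# fields mentioned above (message, authors, title, version, date-released);
# the cff-version value here is an assumption -- consult the spec.
def render_cff(title, version, authors, date_released,
               message="If you use this software, please cite it as below."):
    lines = ["cff-version: 1.0.3", f"message: {message}", "authors:"]
    for family, given in authors:
        lines += [f"  - family-names: {family}", f"    given-names: {given}"]
    lines += [f"title: {title}", f"version: {version}",
              f"date-released: {date_released}"]
    return "\n".join(lines)
```

Calling `render_cff("My Research Tool", "1.0.4", [("Doe", "Jane")], "2017-12-18")` yields a plain-text YAML file suitable for committing next to a README or LICENSE.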

There are some tools for CFF available from the GitHub org (a doi2cff resolver/converter, a github/-lab2cff extractor, a generic converter (CFF to BibTeX, CodeMeta, EndNote, RIS); Python, Ruby and Java tooling). A generator web app prototype was created during the CW18 hack day (release forthcoming).

There has also been some uptake, particularly by the Netherlands eScience Center, where CFF is used for providing citation metadata for their software directory.

Discussion

During the SCIWG working meeting mentioned above, concerns were voiced that, for a small community such as ours, developing and maintaining two different metadata formats might be too much of a strain on resources. What I have therefore taken away from that meeting are three options for CFF to align with CodeMeta:

  1. Let CFF die
  2. Transform CFF into a CodeMeta YAML representation
  3. Achieve and maintain full compatibility

Before and during the Collaborations Workshop 2018, I have juggled pros and cons of these options and have discussed them, partly in great depth, with CW18 attendees.

Considerations including feedback from CW18

CFF as CodeMeta YAML representation

As for point 2 above (transform CFF into a CodeMeta YAML representation), this is something @mfenner suggested during the SCIWG working meeting. As YAML is a superset of JSON, it would be great if CFF could represent CodeMeta as codemeta.yaml and be convertible via base JSON/YAML libraries in programming languages. However, YAML is not a representation format for schema.org, and hence it's impossible to convert without loss, or (for YAML to JSON-LD conversion) without manipulation during the conversion process.

On the other hand, one of the next steps for CFF will be to create a "CFF-Meta" module (cf. discussion in this issue), which will add support for those fields in CodeMeta that aren't yet represented in CFF, hence allowing for lossless conversion* (although not simple transformation) between the two formats.

This leaves us with two options: no CFF, or a fully compatible CFF.

Discard CFF

The simplest option, arguably, but I'd like to make a case against it for the following reasons.

CFF and CodeMeta are not the same thing and are not doing the same thing

  • While CodeMeta is an exchange format, CFF is an input/provision/"documentation" format.
    As such, one of CFF's use cases, as the direct successor of free-text CITATION files, is to be distributed with artifacts, similar to a README or LICENSE file.
    Additionally, CFF is self-descriptive in that it must contain a message, used to tell the user what to do with the provided metadata.

Some of the feedback collected during CW18 suggested that while some communities would not know where to start with a codemeta.json file, they'd be happy to write CFF files for their software. This is of course highly subjective, but as it was mentioned quite often, it may stand as a valid point. So perhaps I should rephrase: CodeMeta is the better exchange format (undeniably); CFF is the better input format.

  • CodeMeta is a multi-purpose format; CFF (Core) is very much citation-centric.
    I think the strongest support for this claim is that CFF actually enforces application of the software citation principles by requiring the data listed as basic requirements in table 2 of the principles paper, for the use cases it is mainly meant to support.
    Additionally, it supports the provision of citation metadata not only for whole software projects/versions, but also for smaller units (see above), and, e.g., single commits.
    With fine-grained key sets for, e.g., different types of repositories, CFF is attractive for corner cases such as providing citation metadata for, e.g., legacy software (as suggested by @drjwbaker).
    And, CFF supports lists of scoped secondary references, e.g., algorithm papers, etc. (see above).

  • CFF is human-centric (in terms of writability/readability); CodeMeta is, arguably, more machine-centric by design.

  • The community does actually want CFF to exist!
    Apart from the "simplicity" feedback noted above, this claim is mostly based on personal feedback from CW18. CFF is recognized as a thing to use for providing software citation metadata, partly by virtue of its name (which has been described, not by me, as sounding "official, longstanding, authoritative"), whereas the same understanding of CodeMeta did not seem to have permeated the group of attendees at CW18. (This is obviously not a very strong point, as it is a matter of publicity to change this.)

In addition to this, there has already been decent uptake, see above.

Proposal: Let CFF and CodeMeta co-exist in the primary tier of software citation

During the CW18 mini-workshop, @danielskatz provided the following comment, which I think is very much to the point:

I want to figure out how we put CFF and CodeMeta together, so we don’t have two unrelated duplicative things running around at the same time.

I'd like to make the following proposal to solve this, up for discussion.

Let both formats do what they do best, as alternative solutions, while enabling downstream conversion to CodeMeta.

In my opinion, there are no downsides to letting the user choose which format to use for the primary provision of software citation metadata. If a user prefers one over the other, that's perfectly fine. If I were forced to pick which one should be preferred, I'd say CFF, just because (IMHO) it is more user-friendly, and thus makes the whole software citation workflow more accessible to possibly less informed individuals, and better suited specifically to the simpler citation use cases referred to above; but I strongly believe this is a decision to be made by the actual user.

Also, I think it's fine that either format can inform end user-facing tools that process the provided information, such as code platforms, reference managers, or applications themselves (which may read, format and display via cite() calls or similar).

As stated, I believe it is fine to have both options at the primary stage, i.e., direct or mediated provision of software citation metadata by the initial supplier ("authors") of a software. However, as CodeMeta is clearly the exchange format of choice, the crucial factor in all this is that conversion from CFF to CodeMeta should be implemented as soon as metadata exchange is in preparation, or actually happens.

So: Users should be able to choose which format to write, or generate, initially, but should be encouraged and supported in transforming CFF to CodeMeta downstream.

This can happen via user-initiated conversion, and there's already a tool to do that. More importantly, though, this should be automatable at certain steps in the development/release/share workflow, e.g., at deploy time (Maven Release Plugin, twine, etc.), in CI/CD (Travis, Jenkins, etc.), in the GitHub-Zenodo bridge, and so on. Some efforts related to this have already been made; others are underway. And I don't think these efforts actually drain resources from the SCIWG; instead, they seem to help with onboarding further parties to software citation implementation.
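Such an automated conversion step could be as simple as a key crosswalk. The mapping below is illustrative only; the crosswalk table maintained by the CodeMeta project is the authority on the real CFF-to-CodeMeta mapping.

```python
import json

# Illustrative subset of a CFF -> CodeMeta key crosswalk; the CodeMeta
# project's crosswalk table is the authority on the real mapping.
CROSSWALK = {
    "title": "name",
    "version": "version",
    "doi": "identifier",
    "date-released": "datePublished",
    "repository-code": "codeRepository",
    "license": "license",
}


def cff_to_codemeta(cff):
    """Convert an already-parsed CFF mapping into a codemeta.json string."""
    meta = {
        "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
        "@type": "SoftwareSourceCode",
    }
    for cff_key, cm_key in CROSSWALK.items():
        if cff_key in cff:
            meta[cm_key] = cff[cff_key]
    if "authors" in cff:
        meta["author"] = [
            {"@type": "Person",
             "givenName": a.get("given-names"),
             "familyName": a.get("family-names")}
            for a in cff["authors"]
        ]
    return json.dumps(meta, indent=2)
```

A function like this is exactly the kind of thing that could run in CI at release time, so that the codemeta.json shipped with an artifact is always derived from the hand-maintained CFF file.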


* "Lossless" conversion in that all of the actual software metadata can be converted. I'm not sure whether CodeMeta supports multiple (scoped) secondary references as is, so perhaps we should discuss whether this is something that could be useful to have in CodeMeta as well.

Broader issues that need to be addressed to implement software citation

In the last working group call, on 6 Feb 2018, two major themes emerged:

  • Tools and infrastructure: mostly small issues and gaps in existing workflows
  • Documentation and training, targeted to specific stakeholders.

Stakeholder communities:

  • infrastructure providers building services that facilitate software citation
  • software authors writing software
  • researchers citing software
  • publishers publishing literature with software citations

The following questions need to be addressed:

  • How do we collect and describe the gaps in tools and infrastructure? Via GitHub issues?
  • What are existing tools, services and communities in the above areas?
  • Should we form subgroups that work on addressing documentation and training for these communities?
  • How is documentation and training organized? A central website, a set of publications, a webinar series?

A data model for software on Wikidata

As briefly mentioned in today's call, more and more software is being indexed in Wikidata, and the data model for that is evolving.

Here is a SPARQL query that gives the Wikidata properties that are most used on items that are instances (or instances of subclasses) of software:

SELECT DISTINCT ?property ?propertyLabel ?count
WITH {
  SELECT DISTINCT ?property (COUNT(*) AS ?count) WHERE {
    ?item wdt:P31/wdt:P279* wd:Q7397 ;
          ?p [] .
    ?property a wikibase:Property ;
              wikibase:claim ?p .
  }
  GROUP BY ?property
} AS %results
WHERE {
  INCLUDE %results .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
ORDER BY DESC(?count)
LIMIT 100

With properties like P2860 (cites), citation relationships can be expressed, possibly with qualifiers that provide more details.

Are there any "guinea pigs" of software citation? If so, we could try to see how Wikidata can represent these, and take things from there.

Support for software citation in citeproc / citation style language

Generic citations work fine, but if we want formatted citations for software to look different, we need

  • support for type ComputerProgram or Software in Citeproc
  • See whether any citation styles have specific recommendations for software, and whether they are implemented in the respective CSL styles

How to get citation metadata into repos?

In order for versions of software that have not been published by their authors to be citable with full metadata, that metadata needs to be created and stored by the software authors. (See https://danielskatzblog.wordpress.com/2017/09/25/software-heritage-and-repository-metadata-a-software-citation-solution/)

The metadata could be stored in a codemeta.json file, or via the citation file format work.

In any case, the subject of this issue is how do we get authors to start doing this?

One method would be to create such a file automatically when a new repo is created. Would GitHub be willing to do this?

Or could this be combined with the standard README?

Automating metadata file maintenance (templated metadata with a metadata discovery and rendering service)

I'm approaching this working group from a software producer's perspective. At LSST we have a few hundred repositories on GitHub (https://github.com/lsst, https://github.com/lsst-dm, https://github.com/lsst-sqre, https://github.com/lsst-sims are our major GitHub organizations), and have a large group of people contributing to these repos. As much as possible, we rely on automation to move towards a continuous delivery ideal to ensure our code releases are reliable.

The idea of putting something like a CITATION file #2 or codemeta.json #4 in our repositories is great, and I think we're going that route. We especially like codemeta / JSON-LD because it means we can add LSST-specific metadata for our own internal purposes. At the same time, deploying codemeta.json at scale across all of our repositories could cause some maintenance challenges.

If LSST has 500 GitHub repositories, we'd have 500 codemeta.json files. And like documentation, it's sometimes difficult to rely on software developers in each project to keep that metadata accurate and up-to-date. For example, every time there is a new contributor we'd need to add them to codemeta.json. We might add a new code dependency, so we'd have to ensure the dependency metadata is up to date. Or at worst, every new commit on GitHub is in some sense a new release/version of the software for provenance purposes; it's not tractable to have a codemeta.json file committed to a repo reflect that sort of continuous versioning information.

A solution I'm interested in is combining codemeta.json metadata committed to a repository with metadata that's intrinsic to the repository itself. Things you can discover from a software repository are:

  • Name
  • Origin URL
  • Committers and reviewers (see also #8)
  • Version (Git commit, tag, or version embedded in a setup.py file for example; see also #16)
  • Date last modified
  • Dependencies
  • License
  • (and anything else that could be discovered from the source code, Git history, or a language specific metadata file like Python's setup.py or node.js's package.json)

Here's a system I'm envisioning:

  • Software repositories have a template metadata file. In that template metadata file, we put metadata that can't be discovered by any other means, like names of funding agencies, technical managers, and non-code contributors.
  • There's a web service (i.e., REST API) capable of generating a fully hydrated codemeta.json object on-demand for a Git repository at any Git ref. The web service inspects the Git repository for metadata and merges that metadata with the existing, manually maintained template metadata file.
  • When we make a code release, or even create a maintenance branch for a release on GitHub, we use the web service to render codemeta.json and commit that metadata into the Git repository/software distribution. Potentially the master branch could even carry the codemeta.json rendered from the latest release. This metadata rendering and committing happens automatically on the continuous integration server.
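The merge step of that system can be sketched as below. `discovered_metadata` is a hypothetical helper (real discovery would also inspect setup.py, package.json, license files, etc.), and the precedence rule (template overrides discovered values) is one possible design choice, not the only one.

```python
import subprocess


def discovered_metadata(repo_path):
    """Fields derivable from the Git repository itself (hypothetical helper)."""
    def git(*args):
        return subprocess.check_output(
            ["git", "-C", repo_path, *args], text=True).strip()

    return {
        "version": git("describe", "--always"),            # commit or tag
        "dateModified": git("log", "-1", "--format=%cs"),  # last commit date
    }


def hydrate(template, discovered):
    """Merge discovered fields under the manually maintained template.

    The template wins on conflicts, so hand-curated values (funders,
    non-code contributors) are never clobbered by repo introspection.
    """
    merged = dict(discovered)
    merged.update(template)
    return merged
```

The web service in the system above would essentially call `hydrate` with the repo's template metadata file and the introspected fields, then serialize the result as codemeta.json.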

In some ways, this is similar to how we're approaching software documentation. Combining code and its documentation in the same repository helps make a software product more self-contained from a developer's perspective and makes it easier to maintain versioned documentation. In the same way, codemeta.json embedded in a repository is useful for maintaining versioned metadata. But we also rely on automation in a continuous integration service to help us produce, render, and validate the documentation (for example, generating an API reference by inspecting the code base and merging API signatures with human-written documentation strings).

I'm curious if others have thought about the maintenance of codemeta.json files at scale, and whether this approach is generally tractable?

A significant challenge is that the web service needs to know how to introspect the software. At LSST we have some non-standard practices for building software, so we'd need to implement a web service that knows about the LSST build system, in addition to standard Python PyPI packaging, for example.

A spin-off of this approach is a "linting" service that runs in continuous integration and identifies when metadata in codemeta.json is out of date. In this case, a developer would still maintain codemeta.json manually, but would be forced to resolve metadata discrepancies before merging a PR.

comses.net public release

This issue is in response to the call to action on disseminating what we're working on.

We've officially migrated from our old Drupal 7 openabm.org app to www.comses.net and are working on:

  1. automatically generating codemeta based on the metadata collected during our codebase ingest process
  2. minting proper DOIs for the published computational models at our computational model library (https://www.comses.net/codebases) - we plan to have git repos behind the scenes and could publish to GitHub and use Zenodo but it would be less convoluted if we could mint directly via DataCite or some other provider. Our old arrangement minting handles at handle.net with the ASU libraries folk has fallen into disarray.
  3. a peer review process for submitted computational artifacts (perhaps with badging à la https://www.acm.org/publications/policies/artifact-review-badging)
  4. usability & putting out lots of 🔥

Codemeta Task Force

This is a placeholder for the CodeMeta taskforce

Tasks

  • Determine initial goals for task force
  • Gather volunteers

Creating tooling which generates a citation for a code repository

There are many situations where we wish to generate a citation for a piece of software being developed in a code repository such as GitHub, GitLab or BitBucket.

In this case, we must decide what metadata can be automatically harvested from the repo info (following CodeMeta concepts), and what format the file (CITATION.md, codemeta.json) should be in.

Publisher Adoption Task Force

This task force will work with the publishing community to identify the processes, policies and infrastructure that need to be adapted to allow and encourage adoption of the software citation principles.

Tasks

  • Determine initial goals of task force
  • Identify contacts at publishers

What is the publisher?

This topic came up before; see also codemeta/codemeta#140. For software source code, a number of organizations could be the publisher:

  • the organization making the code available, e.g. rOpenSci
  • the source code repository, e.g. GitHub
  • the archive, e.g. CRAN
  • the archived version, e.g. Zenodo

The publisher is important for citation styles unless there is a container-title, as is the case for article-journal, which will be the fallback style unless the citation style has something specific for software.

Develop a website/service/tool that generates a software citation in any citation format

To make it easier for people to cite software, we should provide tooling/services that enable people to generate a citation in any common citation format.

The Citation Styles project potentially provides a way to produce tooling that goes between different citation formats.

The tasks for this hack would be:

  • understand how codemeta and CSL interrelate
  • create a CSL schema for codemeta
  • investigate using tools based on CSL such as citeproc-js to convert between citation styles for a sample software citation written in codemeta.json (or indeed a different starter style)
  • develop a website (similar to http://www.citationmachine.net/ or http://www.citationconverter.com/) to enable someone to easily generate software citations in different styles
  • extend that website so that it could take a DOI for a software object (e.g. in Zenodo) or a GitHub repo URL (see #26) and generate a citation for it in any style
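To make the conversion step concrete, here is a toy formatter that renders a rough APA-like software citation from CodeMeta-style fields. It does no real CSL processing; a production tool would build a CSL-JSON item and hand it to a citeproc implementation such as citeproc-js. The field names used (author, datePublished, name, version, identifier, codeRepository) are CodeMeta/schema.org terms.

```python
def format_citation(meta):
    """Render a rough APA-like citation string from CodeMeta-style fields.

    A toy formatter for illustration only; real style handling belongs
    in a CSL processor such as citeproc-js.
    """
    authors = ", ".join(
        f"{a['familyName']}, {a['givenName'][0]}." for a in meta["author"])
    year = meta["datePublished"][:4]
    locator = meta.get("identifier") or meta.get("codeRepository", "")
    return (f"{authors} ({year}). {meta['name']} "
            f"(Version {meta['version']}) [Computer software]. {locator}")
```

A website like the one proposed above would wrap this kind of logic once per style, or, better, delegate all styling to CSL so that every existing style comes for free.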

Software citation support in DataCite metadata

The DataCite Metadata WG is working on better support for software citation in the metadata. This will be part of the next release of the schema (4.1), planned for the end of the year.

Citation File Format update

This is re the ACTION: all to open issues in GitHub to disseminate the things they're working on, as I unfortunately had to miss the call.

Progress

Plans

  • Mini-workshop at CW18
  • Expand the tools infrastructure, perhaps as part of further hack events
  • Further develop specs to include a "meta" module which can represent everything that's in CodeMeta
  • For my thesis, look at options for implementing transitive credit via CFF and/or other means

Headaches

  • Although CFF is IMHO a valid intermediary solution between no/unstructured metadata and a more ideal implementation, I still can't shake the feeling it might be duplicating too much effort, especially vis-à-vis CodeMeta!
    Perhaps it's good in that it provides a low-threshold entry into providing citation metadata at all, and CFF can be transformed into CodeMeta JSON-LD anyway via the crosswalk? See notes:

August: (in response to Dan's blog post) if we start parsing "soft" text-based citations, what will be the issues compared to using structured metadata? Dan: focused on getting someone to do something, before they can do the right thing, as we can't wait to solve one before we solve the other.

Perhaps CFF (with crosswalkability) is a good starting point? Happy to discuss!

Minimal working example for largely _automated_ import, use & render of software citation?

Hello!

I read through https://research-software.org/citation/researchers/ and was left wondering whether there are already technical solutions to the three workflow steps of importing, using in a manuscript, and rendering a software citation? A beginner-friendly MVP, so to speak, that does not rely on formatting a reference list manually, fiddling with BibTeX item types, or writing one's own CSL?

Importing works already well, if a BibTeX snippet is offered or a Zotero-translator. Example: R-Packages.js on CRAN.

Using is also fine, since many text editors and word processors integrate well with most reference managers; thus, inserting a citation into the doc also works fine.

However, when one actually wants to render the document, which BibTeX or CSL styles and processors are available that treat software in a minimally useful way, such as rendering the version number from its own field instead of as a "v1.2.3" appendix in the title or description?
