The International Semantic Web Conference is the premier conference on the Semantic Web and Linked Data. Accordingly, we expect the research track of the ISWC to be the premier venue for publishing new, groundbreaking, exciting research on all things Semantic Web and Linked Data. Researchers from all over the world submit their papers to the conference, and we are working hard to make sure that the reviewing process is unbiased and fair and leads to a result that rewards excellence, so that we can set up a top-notch program and make the trip to Monterey worthwhile not only for its views and Aquarium, but also scientifically.

ISWC has a number of different tracks. This can be confusing at first glance, but the setup has evolved over the years to support a fairer reviewing process: it is hard to weigh a paper with a solid research contribution against a paper describing a widely used but rather pragmatic ontology. Both should have their place at ISWC, and the way to allow for this that has crystallized over the years is a number of dedicated tracks. The chairs of the tracks will soon publish a guide to help you find your way to the right track.

Kalina and I are the chairs of the research track. We serve the conference and you by organizing the reviewing process and are ultimately responsible for the selection of the papers for the research track. The process has developed over many years, and we are thankful to last year’s chairs, Miriam and Claudia, for bringing us up to speed and sharing their insights. The criteria that are used by the research track are as follows:

  • Is the work appropriate for the ISWC? Does it, at its core, deal with the Semantic Web and / or Linked Data? Or would the paper be better served by a different venue, where it would be more likely to receive high-quality reviews and have greater impact upon presentation? Does the paper take possible ethical implications sufficiently into account? Are the work and the paper inclusive?
  • Is the paper innovative and original? Does it explore and test new ideas? Does it make claims and build theories that bring the field forward and answer questions that we didn’t previously have answers to?
  • What is the expected impact of these novel ideas? Does the work presented in the paper matter? Are there people who are actually asking the questions this paper answers? Are the results generalizable, or do they apply only under very narrow circumstances? Does the paper show that the results are indeed generalizable? Is the impact well argued for? It shouldn’t be the task of the reviewer to figure out whether the paper is expected to have impact – ideally, the authors will have collected the data to argue how their results are applicable and how much they change things.
  • Is the evaluation sound? Is it well performed? If the evaluation included human subjects, were the required protocols followed? Is there enough detail in the paper to replicate the results? Is there enough detail in the paper about the results? Are there links to more details that help with replication? Links to the full results? An ideal paper would not only allow, but make it actually easy for an interested researcher to replicate the result, to go and change parameters and methods and thus to be able to improve and work on top of the published paper with ease.
  • Are the ideas sound? Are they well implemented? Does the argument the paper makes make sense? In particular, does the evaluation actually support the novel claims and ideas? A frequent beginner’s problem is an introduction that makes grandiose claims about what the paper aims for, only for the evaluation to answer a related but much less impactful question. Is the paper honest and explicit in what it tries to achieve, and does the work done support what the paper claims?
  • Is the relevant related work sufficiently complete? Or is relevant work missing? Presenting related work in your paper serves several important roles; it is part of how science works, not just a section you have to write. First, you need to present the work that your work builds upon. Not only to give credit and citations to that work – and citations are a crucial currency in today’s academic world – but also to allow the reader of the paper to understand what concepts and traditions you are building on. Second, you need to present work that might be perceived as similar to yours, but is different. Such work can fall into two buckets: it can be either competing with or complementary to your work. If it is competing, you want to show how you perform relative to the competing work. What are the trade-offs under which one would choose your approach or the others? If it is complementary, you must explain how it complements existing work. What are the areas your work applies to that existing work doesn’t?
  • Last, and in some sense least: is the paper well written? Is the language clear, or is the reader struggling with the language and presentation? Are the figures clear, and are the formatting restrictions followed? It is not the task of the scientific reviewers to correct typos and grammar errors. We don’t expect perfect spelling and grammar – none of the program committee chairs are native English speakers, so we understand the pain – but please make an effort to make the paper understandable and unambiguous.

This should help you understand the review criteria better. If the reviewers score your paper highly on all of these criteria, you can be confident of acceptance. Don’t forget: abstracts are due March 30, and full papers are expected by April 6!

Kalina Bontcheva and Denny Vrandečić

Merrill Hall at the Asilomar Conference Grounds