OSINT & Zombie Journals—Part 5

To improve your evaluation skills, develop three abilities:

  1. Maintain a skeptical mind-set.
  2. Learn which sources are most trustworthy.
  3. Learn to identify reporting errors and inconsistencies. If something does not look right, investigate its veracity.

In addition, the skeptical investigator must develop a structured means of evaluating the relevance and reliability of the collected data, along with the ability to communicate this evaluation. In my opinion, this is the most demanding part of any investigation.

The following list of 13 evaluation criteria was developed over decades of practice by researchers and investigators worldwide, and it appears in one form or another throughout the literature on research and intelligence analysis. I use a Microsoft Excel spreadsheet based upon these criteria to record the evaluation of sources for most investigations; a minimal scripted equivalent appears after the matrix below. Doing this is time-consuming, but it is often necessary to maintain the integrity of the reported data and the conclusions that are drawn from it.

The Internet is renowned for harbouring unreliable information. The following evaluation matrix will work for you if you rigorously apply it.

Evaluation Matrix

  1. Recency. Do the data appear to be current on the subject, or the most appropriate for the historical time period? Are sources dated and maintained?
  2. Relevancy. Is there a direct correlation to the subject? What is the tone of the information display? Is it popular, scholarly, or technical?
  3. Authority. What is the reputation of the data, and the data-provider? Has this source of data been cited elsewhere? What is the reliability of the source? How can you document or qualify the source of the information?
  4. Completeness. Are alternative data or views cited? Are references provided? Given what you know about the source, is there any evidence that the data is NOT ‘slanted’?
  5. Accuracy. Does the source furnish background information about data sources and/or in-­depth data? Are the complex issues of data integrity and validity oversimplified? Are the terms adequately defined? Are there references or sources of quotes?
  6. Clarity. Is the presentation of information really credible? Can bias of the information providers really be ruled out? Are there any logical fallacies in presentation of the data or assertions, or in other statements or publications on the page or by the source? Are key assumptions described, or are they hidden in the hope that the reader will be gullible?
  7. Verifiability. Can the information be verified? Can you find corroboration? If not, why not?
  8. Statistical validity. Can the key points or critical data be supported by standard statistical testing? With subjective information, one verifies by whatever corroboration can be found. With numerical information, many questions arise. What statistical inference is needed in order to accept any implied inferences of the data displayed? Are there clear explanations that allow readers or viewers to qualify the implications of numerical “averages” or “percentages”?
  9. Internal consistency. Do the data or commentary contain internal contradictions? Know what you can about the source, and scan for logical fallacies throughout the presentation.
  10. External consistency. Do the data reflect any contradictions among the source documents? In the assertion of information or views, is there an acknowledgment or recognition of alternative views or sources? If not, and source documents are involved, one might suspect that the author had an “agenda”.
  11. Context. Can fact be distinguished from opinion? Are sources taken or used out of context?
  12. Comparative quality. Are some data clearly inferior to other data? Which are the “best” data when you consider the above eleven tests (i.e., the most recent, most relevant, most authoritative, most complete, most accurate, and so forth)? You should always be evaluating the information as you browse the Net. Check the little things that journalists watch for. A misspelled name, for example, could be a warning sign, even in an academic paper, that the author was careless in other areas as well. Do any statements seem exaggerated? If so, why has the author exaggerated? Is it a spur-of-the-moment statement by e-mail, or is the exaggeration more deliberate? Are you reading instant analysis, quick off the mark, or the results of a carefully crafted study, or a meta-analysis, which is a study of all previous studies on a subject? What do you think has been left out of the report? What an author omits may be just as important to a researcher as what an author includes. What’s left out could reveal much about the bias of the information you are reading. Take notes on such matters as you find sources, and possibly include footnotes in your paper, if that’s what you’re doing; it will provide evidence that you have critical thinking abilities! Don’t ignore bias as a valuable source of information, either: even an untrustworthy source is valuable for what it reveals about the personality of an author, especially if he or she is an actor in the events.
  13. Problems. There’s another problem, one that grew through the 1990s, called by some the Crossfire Syndrome, after Crossfire, the CNN public affairs show, and its tabloid television imitators. The Crossfire Syndrome drowns out the moderate voices in favor of polarization and polemics. Confrontation between polar opposites may make for good television, but it often paints a distorted view of reality. On the Net, and throughout North American media culture, the Crossfire Syndrome is acute. In flame wars, the shouters on both sides are left to use up bandwidth while the moderate voices back off. Political correctness of any sort, right or left, religious or secular, tends to distort material while creating interest, at the cost of ‘truth’. All of us are left with the task, with nearly any information we’re exposed to, of seeking out the facts behind the bias. It’s been said many times that there is no such thing as objectivity. The honest researcher tries to be fair. The best researcher is both prosecutor and defense attorney, searching out the facts on all sides (there are often more than two).
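If you prefer to keep this matrix in code rather than in Excel, the following is a minimal sketch of the spreadsheet workflow I described above, assuming each criterion is scored from 1 (poor) to 5 (excellent); the source entry, URL, and file name are illustrative only.

```python
import csv

# The 13 evaluation criteria from the matrix above.
CRITERIA = [
    "Recency", "Relevancy", "Authority", "Completeness", "Accuracy",
    "Clarity", "Verifiability", "Statistical validity",
    "Internal consistency", "External consistency", "Context",
    "Comparative quality", "Problems",
]

def score_source(source, url, scores, notes=""):
    """Return one evaluation row; `scores` maps criterion -> 1..5."""
    row = {"Source": source, "URL": url, "Notes": notes}
    for criterion in CRITERIA:
        row[criterion] = scores.get(criterion, "")  # blank = not yet assessed
    return row

rows = [
    score_source(
        "Example journal article",          # illustrative entry only
        "https://example.org/article",
        {"Recency": 4, "Authority": 2, "Verifiability": 1},
        notes="Publisher appears on a predatory-publisher list.",
    ),
]

# Write the evaluation matrix to a CSV file that Excel can open.
with open("evaluation_matrix.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["Source", "URL", *CRITERIA, "Notes"])
    writer.writeheader()
    writer.writerows(rows)
```

Opening evaluation_matrix.csv in Excel gives you one row per source and one column per criterion, which turns the comparative-quality test (criterion 12) into a simple matter of sorting.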

The advent of the 24-hour all-news network, along with a procession of imitators and the Web, produced a related distortion: the CNN Effect. The CNN Effect lives off one continually asked question, ‘how bad can it get?’, as a way to maintain viewer interest. This severely distorts the information presented.

A Test

If you want to test what you have learned, here is an example.

You are researching how IT operations impact carbon dioxide emissions. In the data you uncover, you find the following journal article: “Towards Green Communıcatıons”. What would your evaluation of this article be?

OSINT & Zombie Journals—Part 4

Always keep in mind that, as the old pessimist philosopher Arthur Schopenhauer stated, “The truth will set you free–but first it will make you miserable.”

The RICE Method of Evaluation

Using the RICE method can help you decide how to respond to information or intelligence:

  • R for reliability: the basic truthfulness or accuracy of the information you are evaluating.
  • I for importance: the importance of the data, based upon its relevance.
  • C for cost: the cost of the data (and of your possible reactions or actions relating to the information).
  • E for effectiveness: the effectiveness of your actions if based upon this information. Would actions based upon this information solve the problems you face?

This evaluation format is useful for summarizing collected data and for analyzing how you might apply the data in a broad range of situations. However, it does not delve into specific metrics used to evaluate a particular piece of information. The next article will provide 13 evaluation metrics to help you do that.
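To make the format concrete, here is a minimal sketch of a RICE record in Python; the 1-to-5 scoring scale, the field values, and the decision threshold are my illustrative assumptions, not part of the method itself.

```python
from dataclasses import dataclass

@dataclass
class RiceAssessment:
    """One RICE evaluation of a piece of information (scores 1-5)."""
    item: str
    reliability: int    # R: basic truthfulness or accuracy
    importance: int     # I: importance based upon relevance
    cost: int           # C: cost of the data and of acting on it (5 = cheap)
    effectiveness: int  # E: would actions based on it solve the problem?

    def worth_acting_on(self, threshold=12):
        # Illustrative rule of thumb only: act when the combined score
        # clears a threshold you choose for your own investigations.
        return (self.reliability + self.importance
                + self.cost + self.effectiveness) >= threshold

tip = RiceAssessment("Anonymous claim about journal ownership", 2, 4, 3, 2)
print(tip.worth_acting_on())  # False: low reliability drags the total down
```

The point of the helper is only to show that RICE lends itself to a simple, repeatable summary; choose weights and thresholds that suit your own work.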

In this context, the term metrics means a system of related measures that facilitates the quantification of a particular characteristic.

Reliability

Reliability is the basic truthfulness or accuracy of the information. When evaluating data from journals, you may not have the technical knowledge to evaluate the content itself. However, you do have the ability to compare the date of publication against the ownership history of the journal. You also have the ability to compile a list of the author’s previous journal articles, their dates of publication, the names of the journals, and the publisher or owner of each journal. You can also identify conferences that the author attended or at which he or she was a speaker, and then determine who ran or supported these conferences.
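Much of that compilation can be automated against public bibliographic sources. The sketch below queries the Crossref REST API (api.crossref.org), which is free and needs no key; the author name is illustrative, and because the API does fuzzy name matching, you must still verify each hit yourself.

```python
import requests

def author_history(name, rows=50):
    """List works whose author field matches `name`, via Crossref."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.author": name, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    for item in resp.json()["message"]["items"]:
        # Crossref returns titles and journal names as lists; guard for gaps.
        title = (item.get("title") or ["(untitled)"])[0]
        journal = (item.get("container-title") or ["(no journal)"])[0]
        publisher = item.get("publisher", "(unknown publisher)")
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        print(f"{year} | {title} | {journal} | {publisher}")

author_history("Jane Q. Example")  # illustrative name only
```

From the resulting list of journals and publishers, you can then check each publisher against a predatory-publisher list.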

Predatory publishers often create or sponsor conferences to showcase their authors, who may pay to be published. The conference may be part of a paid-for publication package. Sometimes they advertise a conference, take in attendance fees, and then the conference never occurs.

It is sometimes difficult to link a conference to a predatory publisher. You have to examine the page source code, the conference domain registrations, and the list of speakers to tie the conference to a predatory publisher, as in the sketch below.
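Here is a minimal sketch of that cross-check, assuming a Unix-style whois client is installed on your system; the conference domain and the publisher names being searched for are purely illustrative.

```python
import subprocess
import urllib.request

DOMAIN = "example-conference.org"         # illustrative domain only
PUBLISHER_HINTS = ["Example Publishing"]  # suspected names; illustrative

# 1. Domain registration: who registered the conference site?
whois_text = subprocess.run(
    ["whois", DOMAIN], capture_output=True, text=True, timeout=60
).stdout
for line in whois_text.splitlines():
    if any(k in line.lower() for k in ("registrant", "organization", "org:")):
        print(line.strip())

# 2. Page source: does the HTML mention a suspected publisher?
html = urllib.request.urlopen(f"https://{DOMAIN}", timeout=30).read().decode(
    "utf-8", errors="replace")
for hint in PUBLISHER_HINTS:
    if hint.lower() in html.lower():
        print(f"Page source mentions: {hint}")
```

Neither signal is conclusive on its own; a shared registrant and a publisher name buried in the page source together make a much stronger link.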

Cost

Cost is an important factor to consider from two perspectives. First, there is the obvious cost/benefit relationship or, more precisely, the question: is this worth the cost? Second, if the data seem too good to be offered free, then you have to ask: why is this free?

As you now know, open access journals don’t make their money from subscriptions. You must uncover and then evaluate the relationship between how the journal makes its money and the authors who produce the content. Of course, this evaluation metric is not unique to academic and open access journals.

Part 5 will discuss a more detailed evaluation matrix.

OSINT & Zombie Journals—Part 3

Understanding the Need for Evaluation Methods & Citations

The basis of your evaluation method is understanding that you cannot know what you don’t know. However, as you come to terms with this, you can guard against the perils of the Dunning-Kruger Effect.

Dunning-Kruger Effect

There is a saying that “you cannot know what you do not know”. This might seem redundant, but it is true, because it can be impossible to identify gaps in your own knowledge. In other words, you cannot teach yourself what you do not know. Without instruction and training, you are very likely to think that you do, in fact, know “everything” you need to know, when you actually lack the ability to recognize your mistakes – you are unconsciously incompetent. David Dunning and Justin Kruger first tested this phenomenon in a series of experiments in 1999[1].

Typically, the unskilled rate their ability as above average, much higher than it actually is, while the highly skilled underrate their abilities. Confidence is no substitute for skill and knowledge, which must be applied with confidence to ensure a positive outcome.

Without an established evaluation method, the Dunning-Kruger Effect may inhibit proper evaluation of the collected data. This article, and those that follow, should help you adopt an effective evaluation process.

Developing the necessary skills and knowledge is not ‘rocket science’; it is ‘time in grade’. You must simply do it, study how to do it better, and network with people who do it. This process takes years of effort, but do not give up. I have been doing this type of research for 40 years and I am still learning. Now let’s set about reducing the Dunning-Kruger Effect.

Before beginning the evaluation of the collected data itself, the investigator must prepare accurate citations as the starting point for evaluation. The citation quantifies significant attributes of the data and its source.

Citations

A citation is a reference to a published or unpublished source, though not always the original source.

Citations uphold intellectual honesty and avoid plagiarism. They provide attribution to the work and ideas of other people while allowing the reader to weigh the relevance and validity of the source material that the investigator employed.

Regardless of the citation style used, it must include the author(s), date of publication, title, and page numbers. Citations should also include any unique identifiers relevant to the type of material referenced.

The citation style you adopt will depend upon your clientele and the material being reported. If the report will include many citations, you should discuss citation style with your client before producing the report, and your client should be familiar with that style, if at all possible.

Never use footnotes or endnotes for anecdotal information. This prevents supplementary information from masquerading as the citation of a source. Supplementary information belongs in the body of the report, where it is identified as such.

While doing OSINT, you might find a document from an organization that changes its name before you finish your report. In that case, the document was retrieved before the name change. How do you cite the reference? Do you cite it with the old organization name or the new name?

Normal practice is to use the name as it was when you found the document; however, this can cause problems when someone does fact-checking to independently verify the citation, because someone must then find and document the history of the organization’s name.

The solution is to cite the date the document was retrieved and, in square brackets, include the new name, for example, [currently, XTS Organization] or, better still, [as of 11 Jan 13 the name changed to XTS Organization]. The latter addition to the citation creates a dated history of the organization’s name. The dated history of a journal and its publisher is of critical importance when dealing with journals that die and come back as zombies. It is wise to check Jeffrey Beall’s list of predatory publishers while preparing citations. It is also wise to state when this list was checked, in a footnote or in the actual citation, as I now do.

Of course, all Web citations must include the date on which the URL was visited for the purpose for which it is being cited.
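As a minimal sketch of the bookkeeping described above, the following formats a Web citation that carries the retrieval date and an optional bracketed name-change note; the field layout is generic and illustrative, not any particular published citation style.

```python
from dataclasses import dataclass

@dataclass
class WebCitation:
    author: str
    year: str
    title: str
    url: str
    retrieved: str         # date the URL was visited for this purpose
    name_change: str = ""  # e.g. "as of 11 Jan 13 the name changed to ..."

    def render(self):
        # The square-bracket note preserves a dated history of the
        # organization's name, as discussed above.
        note = f" [{self.name_change}]" if self.name_change else ""
        return (f"{self.author} ({self.year}). {self.title}. "
                f"{self.url} (retrieved {self.retrieved}){note}")

c = WebCitation(
    author="Smith, J.",              # illustrative values only
    year="2012",
    title="Annual Report",
    url="https://example.org/report",
    retrieved="11 Jan 13",
    name_change="as of 11 Jan 13 the name changed to XTS Organization",
)
print(c.render())
```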

Bibliographic Databases

The large bibliographic abstract and citation databases are secondary sources that merely collect journal article abstracts and journal titles without much, or any, vetting of the article or journal.

Elsevier’s Scopus is one such service; another is the Thomson Reuters Master Journal List. Do not consider either an authoritative source of quality journals or abstracts: both contain numerous low-quality journals produced by predatory publishers.

Lars Bjørnshauge founded an online index of open-access journals in 2003 with 300 titles. Over the next decade, the open-access publishing market exploded. By 2014, the Directory of Open Access Journals (DOAJ), now operated by the non-profit company IS4OA, listed almost 10,000 journals. Today its main problem is not finding new publications to include, but keeping the predatory publishers out.

In 2014, following criticism of its quality-control process, DOAJ began asking all of its journals to reapply under stricter inclusion criteria, in hopes of weeding out predatory publishers. However, the question remains: how does DOAJ determine whether a publisher is lying?

Attempts to create a ‘whitelist’ of journals seem doomed to failure, especially when attempted by a non-profit using volunteers. Most researchers will judge a journal’s quality by its inclusion in major citation databases, such as Elsevier’s Scopus index, rather than by the DOAJ’s list. As you have seen, though, Scopus and the Thomson Reuters Master Journal List are also vulnerable to manipulation by unscrupulous publishers.

Predatory publishers have realised that these lists offer a very low barrier to entry, especially in certain categories. In addition, because such databases are usually subscription services, some publishers advertise certain authors using fake citations supposedly drawn from bibliographic databases, knowing that certain commercially valuable demographics never verify these citations.
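Inclusion in DOAJ is therefore only one signal among several, but it is at least easy to check programmatically. The sketch below assumes DOAJ’s public journal-search endpoint; verify the current path and response shape against the DOAJ API documentation before relying on it, and treat the ISSN as illustrative.

```python
import requests

def in_doaj(issn):
    """Return True if DOAJ's search API reports a journal for this ISSN.

    Assumes the public endpoint /api/search/journals/<query>; check
    DOAJ's current API documentation before relying on this.
    """
    resp = requests.get(
        f"https://doaj.org/api/search/journals/issn:{issn}", timeout=30
    )
    resp.raise_for_status()
    return resp.json().get("total", 0) > 0

print(in_doaj("1234-5678"))  # illustrative ISSN only
```

Remember that a hit only tells you the journal passed DOAJ’s inclusion process; it does not tell you whether the publisher lied to get in.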

[1] Kruger, Justin; Dunning, David (1999). “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”. Journal of Personality and Social Psychology, 77(6), 1121–1134. http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/0022-3514.77.6.1121 (retrieved 3 Oct 2016).

OSINT & Zombie Journals—Part 2

The Nature of Sources

Primary & Secondary Sources

An archive is a primary source because the contents are documents usually authored by a person with direct knowledge of the topic; this includes public records completed by the subject.

A library is a secondary source because its documents are created from primary sources, as are citations, abstracts, bibliographic databases, etc.

Authoritative Sources

To evaluate the quality of a source, ask questions like:

  • What is the reputation of the data, and the data-provider (including the publisher)?
  • Has this source of data been cited elsewhere?
  • What is the reliability of the source?
  • How can the source of the information be documented or qualified?
  • Is this a primary source or secondary source?
  • Is this a legally required or legally binding source?

Answers to the above questions should help you find the authoritative source. Zombies are never authoritative sources.

In the next article I will discuss evaluation methods, citations, and bibliographic databases.

OSINT & Zombie Journals—Part 1

Many scholarly journals are being bought up by predatory publishers that turn once-prestigious journals into publications full of junk science. Usually, these publishers turn their acquisitions into free ‘open access’ publications on the Internet that are full of typos, inaccuracies, and even outright fabrications.

One such online publisher, the OMICS Group, is being sued by the U.S. Federal Trade Commission for deceptive practices that include spam emails to solicit articles that are not peer reviewed. This same outfit recently acquired two Canadian medical journal publishers.

From the researcher’s perspective, the most deceptive practice of these free open access journals is the fact that authors pay to have their articles published. The second deceptive practice, according to the FTC, is that such publishers falsely state that their journals are widely cited and included in academic databases. To the contrary, the FTC states that PubMed does not include any of the OMICS titles. The FTC also alleges that the work of authors is sometimes held hostage for payment of undisclosed fees.

When Jeffrey Beall, an academic librarian at the University of Colorado, started compiling his list of predatory publishers in 2010, he found only 18. Today, his list has over 1,000 publishers.

When a predatory publisher acquires a journal, it ceases to be a scholarly journal and only lives on as something exploited for profit. Such an acquisition ends proper peer review. The journal becomes a zombie.

For the researcher conducting a literature review, the additional time and effort required to vet every article and citation to eliminate zombie journals has grown to nearly unbearable levels. Of course, this is part of the zombie strategy: flood the scholarly journal space with purulent, infectious zombies to kill off real journals.

Zombie publications are a rising issue for serious researchers. The quality of a literature review affects the quality of the decisions based upon this collected data.

This series of articles is about recognising and avoiding open-source junk. These five articles should help you develop the evaluation skills and processes necessary to avoid falling victim to zombie journals and the other forms of diseased data that infect the open-source domain.