OSINT & Zombie Journals—Part 3

Understanding the Need for Evaluation Methods & Citations

The basis of your evaluation method is understanding that you cannot know what you do not know. Once you come to terms with this, you can guard against the perils of the Dunning-Kruger Effect.

Dunning-Kruger Effect

There is a saying that “you cannot know what you do not know”. This might seem redundant, but it is true: it can be impossible to identify the gaps in your own knowledge. In other words, you cannot teach yourself what you do not know. Without instruction and training, you are very likely to think that you know “everything” you need to know when, in fact, you lack the ability to recognize your own mistakes – you are unconsciously incompetent. David Dunning and Justin Kruger first tested this phenomenon in a series of experiments in 1999[1].

Typically, the unskilled rate their ability as above average, much higher than it actually is, while the highly skilled underrate their abilities. Confidence is no substitute for skill and knowledge, though skill and knowledge must be applied with confidence to ensure a positive outcome.

Without an established evaluation method, the Dunning-Kruger Effect may inhibit proper evaluation of the collected data. This article, and those that follow, should help you adopt an effective evaluation process.

Developing the necessary skills and knowledge is not ‘rocket science’; it is ‘time in grade’. You must simply do it, study how to do it better, and network with people who do it. This process takes years of effort, but do not give up. I have been doing this type of research for 40 years and I am still learning. Now let’s set about reducing the Dunning-Kruger Effect.

Before beginning the evaluation of the collected data itself, the investigator must prepare accurate citations as the starting point for evaluation. The citation records the significant attributes of the data and its source.


A citation is a reference to a published or unpublished source, though not always the original source.

Citations uphold intellectual honesty and avoid plagiarism. They provide attribution to the work and ideas of other people while allowing the reader to weigh the relevance and validity of the source material that the investigator employed.

Regardless of the citation style used, it must include the author(s), date of publication, title, and page numbers. Citations should also include any unique identifiers relevant to the type of material referenced.

The citation style you adopt will depend upon your clientele and the material being reported. If the report will include many citations, discuss the citation style with your client before producing the report and, if at all possible, use a style your client is familiar with.

Never use footnotes or endnotes for anecdotal information. This prevents something that only provides supplementary information from masquerading as a citation of a source. Supplementary information belongs in the body of the report, where it is identified as such.

While doing OSINT, you might find a document from an organization that changes its name before you finish your report. In that case, the document was retrieved before the name change. How do you cite the reference? Do you cite it with the old organization name or the new name?

Normal practice is to use the name as it was when you found the document; however, this can cause problems when someone fact-checks the citation to independently verify it, because they must then find and document the history of the organization’s name.

The solution is to cite the date the document was retrieved and, in square brackets, include the new name, for example, [currently, XTS Organization] or, better still, [as of 11 Jan 13 the name changed to XTS Organization]. The latter addition to the citation creates a dated history of the organization’s name. The dated history of a journal and its publisher is of critical importance when dealing with journals that die and come back as zombies. It is wise to check Jeffrey Beall’s list of predatory publishers while preparing citations. It is also wise to state when this list was checked, either in a footnote or in the actual citation, as I now do.

Of course, all Web citations must include the date on which the URL was visited for the purpose for which it is being cited.
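To illustrate the elements discussed above, here is a minimal sketch in Python that assembles a Web citation string with a retrieval date and an optional bracketed name-change note. The function name and field layout are my own for illustration; adapt the output to whatever citation style your client expects.

    # Minimal sketch: assemble a Web citation with a retrieval date and an
    # optional bracketed note recording a later organization name change.
    def format_web_citation(authors, year, title, url, retrieved, name_note=None):
        citation = f'{authors} ({year}). "{title}". {url} ({retrieved}).'
        if name_note:  # e.g. "as of 11 Jan 13 the name changed to XTS Organization"
            citation += f' [{name_note}]'
        return citation

    print(format_web_citation(
        'Kruger, Justin; Dunning, David', 1999,
        'Unskilled and Unaware of It',
        'http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/0022-3514.77.6.1121',
        '3 Oct 2016'))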

Bibliographic Databases

The large bibliographic abstract and citation databases are secondary sources that merely collect journal article abstracts and journal titles without much, or any, vetting of the article or journal.

Elsevier’s Scopus is one such service; another is the Thomson Reuters Master Journal List. Do not consider either an authoritative source of quality journals or abstracts. Both contain numerous low-quality journals produced by predatory publishers.

Lars Bjørnshauge founded an online index of open-access journals in 2003 with 300 titles. Over the next decade, the open-access publishing market exploded. By 2014, the Directory of Open Access Journals (DOAJ), now operated by the non-profit company IS4OA, had almost 10,000 journals. Today its main problem is not finding new publications to include, but keeping the predatory publishers out.

In 2014, following criticism of its quality-control process, DOAJ began asking all of its journals to reapply under stricter inclusion criteria in the hope of weeding out predatory publishers. However, the question remains: how does DOAJ determine whether a publisher is lying?

Attempts to create a ‘whitelist’ of journals seem doomed to failure, especially when attempted by a non-profit using volunteers. Most researchers will judge a journal’s quality by its inclusion in major citation databases, such as Elsevier’s Scopus index, rather than by the DOAJ’s list. As noted above, Scopus and the Thomson Reuters Master Journal List are also vulnerable to manipulation by unscrupulous publishers.

Predatory publishers have realised that these lists offer a very low barrier to entry, especially in certain categories. In addition, because such databases are usually subscription services, some publishers promote certain authors with fake citations supposedly drawn from bibliographic databases, knowing that certain commercially valuable audiences never verify these citations.

[1] Kruger, Justin; Dunning, David (1999). “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”. Journal of Personality and Social Psychology, Vol 77(6), Dec 1999, 1121-1134.  http://psycnet.apa.org/?&fa=main.doiLanding&doi=10.1037/0022-3514.77.6.1121 (3 Oct 2016).

Jamming Remote Controls

The range of issues I get involved in always amazes me. I recently had a client ask me for solutions to garage door and gate remote jammers. Jammers are simple devices that transmit enough radio-frequency noise to prevent a legitimate signal from activating the garage door or gate. These remote systems operate in the 300 MHz to 400 MHz range.

It seems an executive was carjacked when his gate remote didn’t work. He got out to unlock the gate manually and his car was stolen. The police speculated that the thief used a radio jammer to prevent the gate from opening.

Research into solutions for this led me to jammers for car remotes. Most of these operate at 433 MHz, 315 MHz, and 868 MHz.

No practical technical solution exists for this type of attack. My solution was to eliminate the gate and garage door remotes in favour of more advanced access control systems. The car remote is something else. Training people to stop using them is going to be a difficult task, but it may be necessary in some threat environments.

A jammed car remote may expose the user to robbery, assassination, or abduction during the delay while trying to open the car door. Worse, if the user doesn’t check that the door actually locked after using the remote, an explosive device could be planted in the vehicle and commanded to explode near a high-value target.

The best solution for this problem is a thorough understanding of the user’s threat environment.

A Brief History of Open Source Intelligence

An article with the above title appeared on the bellingcat site.

It is an excellent article, even if I don’t agree that OSINT went into hibernation after WW2. For example, from the end of WW2 through the Cold War, the Foreign Broadcast Information Service in the US (now the Open Source Enterprise) and the BBC Monitoring Service in the UK trawled the airwaves and other open sources, regularly publishing transcripts and analysis of what they heard, and both continue to do so today. There are many other examples.

On the other hand, today’s OSINT is highly influenced by a convergence of technologies. The market penetration of smartphones with 3G connections, combined with the popularity of social media sites, is one such convergence that produces raw data. The other is the availability of inexpensive software and computer hardware to process that raw data for analysis.

OSINT & Zombie Journals—Part 2

The Nature of Sources

Primary & Secondary Sources

An archive is a primary source because the contents are documents usually authored by a person with direct knowledge of the topic; this includes public records completed by the subject.

A library is a secondary source because its documents are created from the primary sources, as are citations, abstracts, bibliographic databases, and so on.

Authoritative Sources

Evaluating the quality of a source means asking questions like:

  • What is the reputation of the data, and the data-provider (including the publisher)?
  • Has this source of data been cited elsewhere?
  • What is the reliability of the source?
  • How can the source of the information be documented or qualified?
  • Is this a primary source or secondary source?
  • Is this a legally required or legally binding source?

Answers to the above questions should help you find the authoritative source. Zombies are never authoritative sources.

In the next article I will discuss evaluation methods, citations, and bibliographic databases.

Proximity Search on Google-Free Wednesday

The international version of Yandex, the Russian search engine, has a collection of advanced commands that includes a proximity operator that is extremely useful for drilling down to what you really want. For example, a search statement might be “opec & saudi” (in the same sentence) or “opec && saudi” (in the same page).

There is also an /n operator that enables you to specify that words or phrases must appear within a certain distance of each other. For example, a search statement might be “opec saudi /3”, which requires the two words to appear within three words of each other.

An interesting operator is the non-ranking “and”, entered as “<<”: the words after the operator do not affect the ranking of the page in the results.

The search operators are listed at https://yandex.com/support/search/how-to-search/search-operators.xml.
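If you want to script these searches rather than type them, a minimal sketch like the one below builds the query URL and opens it in your default browser. It assumes Python and reuses the “opec && saudi” example from above; the operators themselves are exactly as described.

    # Minimal sketch: build a Yandex search URL for a query that uses the
    # operators described above and open it in the default browser.
    import urllib.parse
    import webbrowser

    query = 'opec && saudi'          # both words somewhere on the same page
    url = 'https://yandex.com/search/?text=' + urllib.parse.quote(query)
    print(url)
    webbrowser.open(url)             # comment out if running headless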

OSINT & Zombie Journals—Part 1

Many scholarly journals are being bought up by predatory publishers that turn once-prestigious journals into publications full of junk science. Usually these publishers turn their acquisitions into free ‘open access’ publications on the Internet that are full of typos, inaccuracies, and even outright fabrications.

One such online publisher, the OMICS Group, is being sued by the U.S. Federal Trade Commission for deceptive practices that include spam emails to solicit articles that are not peer reviewed. This same outfit recently acquired two Canadian medical journal publishers.

From the researcher’s perspective, the most deceptive practice of these free open access journals is the fact that authors pay to have their articles published. The second deceptive practice, according to the FTC, is that such publishers falsely state that their journals are widely cited and included in academic databases. To the contrary, the FTC states that PubMed does not include any of the OMICS titles. The FTC also alleges that the work of authors is sometimes held hostage for payment of undisclosed fees.

When Jeffrey Beall, an academic librarian at the University of Colorado, started compiling his list of predatory publishers in 2010, he found only 18. Today, his list has over 1,000 publishers.

When a predatory publisher acquires a journal, it ceases to be a scholarly journal and only lives on as something exploited for profit. Such an acquisition ends proper peer review. The journal becomes a zombie.

For the researcher conducting a literature review, the additional time and effort required to vet every article and citation to eliminate zombie journals has become nearly unbearable. Of course, this is part of the zombie strategy: flood the scholarly journal space with purulent, infectious zombies to kill off the real journals.

Zombie publications are a rising issue for serious researchers. The quality of a literature review affects the quality of the decisions based upon this collected data.

This series of articles is about recognising and avoiding open-source junk. These five articles should help you develop the evaluation skills and processes necessary to avoid falling victim to zombie journals and the other forms of diseased data that infect the open-source domain.

Security & Shortened URLs

As we all know, clicking on a link can send us to digital purgatory. While I don’t worry about this when I am working in a VM, I do in a normal browsing session. This hunter doesn’t want to become the hunted.

The best advice, for general browsing, is to use the WOT browser plugin available for Firefox and Chrome. This will deal with most problem links. While in a VM, I now sometimes do a manual scan of shortened URLs using VirusTotal.

A trusted colleague tells me, “the bad actors are beginning to step up their game now, some actually check the useragent string from the browser and will redirect you to malware and fool the link scanners.”
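One more precaution I find useful is to expand a shortened URL and see its final destination before anything renders it in a browser. The sketch below is a minimal illustration, assuming Python and the requests library (my choice, not a tool mentioned above); note my colleague’s warning that some bad actors vary their response by user-agent string, which is why the sketch sends a browser-like one.

    # Minimal sketch: expand a shortened URL by following its redirects
    # without rendering the page. A browser-like User-Agent is sent because
    # some bad actors serve scanners a different destination.
    import requests

    def expand(short_url):
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
        resp = requests.head(short_url, headers=headers,
                             allow_redirects=True, timeout=10)
        return resp.url

    print(expand('https://bit.ly/example'))   # hypothetical shortened link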

Hunting Elusive Prey on Instagram

Instagram is a photo-sharing site now owned by Facebook. It has about 400 million users and often works in concert with Twitter to distribute photos.

I know of no way to search the posts of a user who has made his profile private. For these elusive private profiles, I reverse-search the profile picture to find other accounts that use it, and go from there.

If the user has updated his profile picture since 2015, view the image, remove the ‘s150x150’ component from the thumbnail image URL, and you may end up at a full-resolution version of the image. Reverse-search that image to find other social media accounts. The profile page may be private, but any posts that appear in Twitter, Tinder, or elsewhere are not.
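Here is a minimal sketch of that URL edit in Python. The exact path pattern Instagram uses changes over time, so treat the regular expression, and the hypothetical URL, as assumptions to check against the thumbnail URL actually in front of you.

    # Minimal sketch: strip the thumbnail-size component (e.g. 's150x150')
    # from an image URL to aim at the full-resolution version.
    import re

    def full_resolution(thumb_url):
        return re.sub(r'/s\d+x\d+/', '/', thumb_url)

    # Hypothetical thumbnail URL, for illustration only.
    print(full_resolution('https://example.cdninstagram.com/t51/s150x150/photo.jpg'))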

Unfortunately, Instagram does not offer a true search facility. To search it, you must rely on traditional search engines and third-party apps. For example, in Google and Bing, use the site: command, as in site:instagram.com followed by your keywords or username.

The Instagram API was shut down this summer but, fortunately for investigators, this has not affected the third-party apps mentioned below.

The Apple-only Photodesk app is the powerhouse for searching and managing Instagram. It allows you to perform the standard Instagram functions of sharing, liking and commenting, but the real value to investigators is the ability to search for content by keyword, tag and username.

When monitoring a current event, Photodesk offers the ability to search by location and create a geofence around that position. This filters the content to show only posts within a certain radius and displays the results on a map.

If you are not an Apple user, or don’t need the publishing features of Photodesk, then Picodash, which was formerly known as Gramfeed, is an alternative. Picodash offers the same advanced searches for Instagram content as Photodesk, enabling the user to search by hashtag, date/time, keyword, user, and location. I like this because it is easy to get stuff I find from it into a report. Of course, it isn’t free at $8 per month, but I think it’s worth it.

VPN Security & Firefox

When you’re hunting in the digital landscape, you don’t want to stand out like a white lion on the Serengeti.

PeerConnections are enabled by default in Firefox. This is bad juju for me, because WebRTC peer connections can leak my real IP address when I am using a VPN connection.

In Firefox, go to ‘about:config’ in the address bar. In the config window search for this setting and change it as follows:

  • media.peerconnection.enabled – double-click it to change the value to false.

As this is such bad juju, I check this to make sure it is set at false before I start any research project. Of course, I do this because I always use a VPN.
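If you launch a dedicated research copy of Firefox from a script rather than by hand, the same preference can be set at startup. The sketch below uses Python with Selenium, which is my own choice for illustration, not a tool discussed above.

    # Minimal sketch: launch Firefox via Selenium with WebRTC peer
    # connections disabled, so the browser cannot leak the real IP address
    # behind the VPN through WebRTC.
    from selenium import webdriver

    options = webdriver.FirefoxOptions()
    options.set_preference('media.peerconnection.enabled', False)

    driver = webdriver.Firefox(options=options)
    driver.get('https://example.com')   # hypothetical starting page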

Hunting YouTube Content

A successful hunt for data includes dragging your prey home and preparing it for consumption. If you have a hungry client to feed, then you will have to chop up your prey into digestible chunks, cook it properly, and then serve it up all pretty-like on a fancy platter, because clients are picky eaters.

Here is what you need to make a delightful repast of what you find on YouTube.

After the disappearance of Google Reader, Feedly became the new standard in RSS readers. However, Feedly is much more than an RSS reader. It allows you to collect and categorize YouTube accounts.

For example, you can monitor the YouTube accounts of politicians, activists, or anybody else who posts a lot of YouTube videos. You get the latest uploads to their YouTube accounts almost instantly. This continuous stream of updated content can be viewed and played in Feedly and does away with individual manual searches of known YouTube accounts.

Of course, Feedly has other uses, but the YouTube use is the greatest time saver. The time saved can be applied to summarizing the video content and analyzing it in terms of how it relates to your client’s objectives.

Inoreader is another feed reader that can organise YouTube account feeds into folders along with a limited number of feeds from Twitter, Facebook, Google+ and VKontakte. It also allows the user to gather bundles of subscriptions into one RSS feed and export them to another platform to go along with the YouTube content.
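Under the hood, both readers are simply subscribing to feeds, and YouTube still publishes a public RSS feed for each channel. If you ever need to pull a channel’s latest uploads outside a reader, a minimal sketch like the one below will do it; it assumes Python with the feedparser library and uses a placeholder channel ID.

    # Minimal sketch: read the latest uploads for one YouTube channel from
    # its public RSS feed, the same feed a reader such as Feedly consumes.
    import feedparser

    CHANNEL_ID = 'UCxxxxxxxxxxxxxxxxxxxxxx'   # placeholder channel ID
    feed_url = ('https://www.youtube.com/feeds/videos.xml?channel_id='
                + CHANNEL_ID)

    for entry in feedparser.parse(feed_url).entries:
        print(entry.published, entry.title, entry.link)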

Just paste the URL of a YouTube video into Amnesty International’s YouTube Dataviewer to extract metadata from the videos. The tool reveals the exact upload time of a video and provides a thumbnail on which you can do a reverse image search. It also shows any other copies of the video on YouTube. Use this to track down the original video and the first instance of the video on YouTube.
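If you prefer to pull the same upload time and thumbnail programmatically, the YouTube Data API exposes them. The sketch below is one way to do it in Python with the requests library and your own API key; it is an illustration of the idea, not how the Amnesty tool itself works.

    # Minimal sketch: fetch a video's upload time and a thumbnail URL from
    # the YouTube Data API (v3). Requires your own API key.
    import requests

    API_KEY = 'YOUR_API_KEY'     # placeholder
    VIDEO_ID = 'dQw4w9WgXcQ'     # example video ID

    resp = requests.get(
        'https://www.googleapis.com/youtube/v3/videos',
        params={'part': 'snippet', 'id': VIDEO_ID, 'key': API_KEY},
        timeout=10)
    snippet = resp.json()['items'][0]['snippet']
    print(snippet['publishedAt'])                  # exact upload time (UTC)
    print(snippet['thumbnails']['high']['url'])    # thumbnail to reverse-search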

A lot of fake videos appear on YouTube, so anything worth reporting needs to be examined as a possible fake. The Chrome browser extension Frame by Frame lets you change the playback speed or step through the frames manually. Besides being a first step in uncovering a fake, it is also an easy way to extract images from the video for inclusion in a report.
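When you need those frames as image files in bulk, another route (my own suggestion here, separate from the extension above) is to let ffmpeg export them from a local copy of the video. A minimal sketch, assuming Python, an ffmpeg installation, and a hypothetical file name:

    # Minimal sketch: export one frame per second from a downloaded video
    # with ffmpeg, ready for inclusion in a report.
    import os
    import subprocess

    os.makedirs('frames', exist_ok=True)
    subprocess.run([
        'ffmpeg', '-i', 'evidence.mp4',   # hypothetical local file name
        '-vf', 'fps=1',                   # one frame per second
        'frames/frame_%04d.png'           # numbered output images
    ], check=True)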

Of course, you will use the Download Helper browser extension, available for both Firefox and Chrome, to help download the videos. Just remember to set the maximum number of ‘concurrent downloads’ and ‘maximum variants’ to 20, and check ‘ignore protected variants’ to speed up the process.

To build a long list of videos to download, use the browser extension Copy All Links, or Link Klipper or Copy Links in Chrome, to capture the links to every video you find. In addition to using this list in your report, you can turn it into an HTML page, as sketched below, and then let Download Helper work away on it for hours, downloading all the videos for you.
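Turning that list into an HTML page is a small scripting job. Here is a minimal sketch in Python; it assumes the captured links sit one per line in a text file, and both file names are placeholders.

    # Minimal sketch: turn a plain-text list of video links (one URL per
    # line) into a simple HTML page of links for Download Helper to process.
    import html

    with open('video_links.txt') as f:                 # placeholder input file
        urls = [line.strip() for line in f if line.strip()]

    with open('video_links.html', 'w') as out:
        out.write('<html><body>\n')
        for url in urls:
            safe = html.escape(url)
            out.write(f'<p><a href="{safe}">{safe}</a></p>\n')
        out.write('</body></html>\n')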

Collecting all this video is the easy part. Sitting through all of it to extract useful data, and then analysing it to see how it helps or hinders your client’s interests, is the painful and expensive part, but it is the only way to cook up what the client wants to eat.

Forcing Firefox to Open Links in a New Tab

During a training class I watched everybody trudge around looking for lost search results. They tried reloading results pages, only to get distorted results. They kept losing the search engine results page and getting lost in a sea of tabs. They wanted to know how to get Google search results to open in a new tab.

Here is my solution for getting tabs to open where I want them to. In Firefox, go to ‘about:config’ in the address bar. In the config window, search for these settings and change them as follows (a scripted way to apply the same preferences is sketched after the list):

  • browser.search.openintab – if true, a search from the search bar opens in a new tab when you trigger the search with the return key
  • browser.tabs.loadBookmarksInBackground – if true, bookmarks that open in a new tab will not steal focus
  • browser.tabs.loadDivertedInBackground – if true, diverted links load in a new background tab, leaving focus on the current tab
  • browser.tabs.loadInBackground – if true, new tabs opened from links load in the background and do not take focus
  • browser.tabs.opentabfor.middleclick – if true, middle-clicking a link forces it to open in a new tab
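If you maintain more than one research profile, you can set these preferences once in a user.js file in the Firefox profile folder instead of clicking through about:config each time. A minimal sketch in Python, with the profile path left as a placeholder:

    # Minimal sketch: append the tab-handling preferences listed above to a
    # profile's user.js file; Firefox applies user.js entries at startup.
    from pathlib import Path

    PROFILE = Path('/path/to/firefox/profile')   # placeholder profile path
    prefs = {
        'browser.search.openintab': True,
        'browser.tabs.loadBookmarksInBackground': True,
        'browser.tabs.loadDivertedInBackground': True,
        'browser.tabs.loadInBackground': True,
        'browser.tabs.opentabfor.middleclick': True,
    }

    with open(PROFILE / 'user.js', 'a') as f:
        for name, value in prefs.items():
            f.write(f'user_pref("{name}", {str(value).lower()});\n')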

This is the type of ‘boring stuff’ that you must master if you want to do Investigative Internet Research and make any money at it. Clients won’t pay for wasted time. You may know where to hunt for data, but you also need to know how to get it into the larder before it goes bad.


Finding and verifying social media content is becoming a greater concern for private investigators (PIs) and their clients. Unfortunately, most PIs do not possess the skills and resources to do this beyond the most rudimentary level.

Some investigation companies will try to build an in-house operation. They will buy technology, or spend money on subscriptions to tools that claim to do the work at the click of a button. This usually proves to be a costly expedition into the unknown that ends in failure: the purchased tools do not live up to their claims, or clients want something the purchased tools and subscriptions don’t deliver.

Some investigation companies will send staff to courses to learn about sources. These are billed as Open Source Intelligence (OSINT) courses. Unfortunately, the OSINT concept usually misses the “intelligence” part, and it is more about gathering raw information than producing usable investigative reporting.

The ‘intelligence’ part is the expensive part. It involves time to conduct the analysis and many hours of learning to present the analysis along with the sources and methods reporting.

Producing a report that goes beyond the OSINT concept is not a secretarial task. Once you go beyond the popular OSINT concept, you start doing Investigative Internet Research (IIR).

Why You Can’t Dictate an IIR Report

Proper IIR reporting does not rely on haphazard Internet searches and does not dump a disorganised load of raw data from the Internet into a client’s inbox. Reports summarize and then analyse the collected data, and explain the sources and methods used to collect it.

The researcher must understand how to use Word and other software because he cannot dictate IIR reports. A dicta-typist cannot produce an IIR report for the following four reasons:

  1. The person transcribing the dictation will not place images, graphs, and video clips properly; yet a picture, screenshot, or video is worth a thousand words.
  2. There is no efficiency at all in dictating a URL and plenty of mistakes would result.
  3. Some Web site names are hard to pronounce and would lead to misspelling (although you might spell them out, there is still a risk).
  4. Whoever writes the report must have all the collected material at hand in order to create footnotes and appendices.

Now you know why the person doing the IIR must also prepare the report.

In the next few articles I will describe the tools and techniques that actually work, but there is no magic button that does the analysis for you.

News–A Better Form of Gossip

Things like Reddit can add to the chaos and anxiety surrounding a fast-moving event. For example, the subreddit on the Boston bombing just added to the chaos.

Reddit is a major media property, and others, who should know better, quote the observations of Reddit editors. However, it is really a platform like Twitter: it only issues corrections AFTER something is published. This can and does wreck lives, especially when the traditional media piles on to amplify the effect.

Both Buzzfeed and Reddit falsely named a missing student as a Boston bomber when, in fact, he had committed suicide before the bombing.

To many, Twitter looks up to date even as erroneous data is retweeted repeatedly. A tweet from the hacker collective Anonymous to its hundreds of thousands of followers illustrates this effect: it named the deceased student and credited Reddit with discovering his identity. The denizens of heavily trafficked corners of the Internet quickly accepted that the deceased student was one of the people responsible, and that Reddit was the first to uncover this.

Journalists fed the rumor mill and jumped to conclusions in their reporting, which only fuelled the frenzy. Eventually, authorities found the body of the missing student and proved that he had committed suicide before the bombing.

The Mac & Malware

Like many Mac users, I’m not too concerned about malware. Traditionally, the vast majority of malware was directed at Microsoft OS platforms. But recent headlines prompted me to consider two pieces of Mac software: Avast Mac Security and Malwarebytes for Mac.

Malwarebytes seems particularly useful if you download software from questionable sources. I’m still not certain AV software is really needed.