Building an image search engine using Python and OpenCV


Let’s face it. Trying to search for images based on text and tags sucks.

Whether you are tagging and categorizing your personal images, searching for stock photos for your company website, or simply trying to find the right image for your next epic blog post, trying to use text and keywords to describe something that is inherently visual is a real pain.

I faced this pain myself last Tuesday as I was going through some old family photo albums that were scanned and digitized nine years ago.

You see, I was looking for a bunch of photos that were taken along the beaches of Hawaii with my family. I opened up iPhoto, and slowly made my way through the photographs. It was a painstaking process. The meta-information for each JPEG contained incorrect dates. The photos were not organized in folders like I remembered — I simply couldn’t find the beach photos that I was desperately searching for.

Perhaps by luck, I stumbled across one of the beach photographs. It was a beautiful, almost surreal beach shot. Puffy white clouds in the sky. Crystal clear ocean water, lapping at the golden sands. You could literally feel the breeze on your skin and smell the ocean air.

After seeing this photo, I stopped my manual search and opened up a code editor.

While applications such as iPhoto let you organize your photos into collections and even detect and recognize faces, we can certainly do more.

No, I’m not talking about manually tagging your images. I’m talking about something more powerful. What if you could actually search your collection of images using another image?

Wouldn’t that be cool? It would allow you to apply visual search to your own images, in just a single click.

And that’s exactly what I did. I spent the next half-hour coding and when I was done I had created a visual search engine for my family vacation photos.

I then took the sole beach image that I found and then submitted it to my image search engine. Within seconds I had found all of the other beach photos, all without labeling or tagging a single image.

Sound interesting? Read on.

In the rest of this blog post I’ll show you how to build an image search engine of your own.


So you’re probably wondering, what actually is an image search engine?

I mean, we’re all familiar with text based search engines such as Google, Bing, and DuckDuckGo — you simply enter a few keywords related to the content you want to find (i.e., your “query”), and then your results are returned to you. But for image search engines, things work a little differently — you’re not using text as your query, you are instead using an image.

Sounds pretty hard to do, right? I mean, how do you quantify the contents of an image to make it search-able?

We’ll cover the answer to that question in a bit. But to start, let’s learn a little more about image search engines.

In general, there tend to be three types of image search engines: search by meta-data, search by example, and a hybrid approach of the two.

Search by Meta-Data

Figure 1: Example of a search by meta-data image search engine. Notice how keywords and tags are manually attributed to the image.

Searching by meta-data is only marginally different than your standard keyword-based search engines mentioned above. Search by meta-data systems rarely examine the contents of the image itself. Instead, they rely on textual clues such as (1) manual annotations and tagging performed by humans along with (2) automated contextual hints, such as the text that appears near the image on a webpage.

When a user performs a search on a search by meta-data system they provide a query, just like in a traditional text search engine, and then images that have similar tags or annotations are returned.

Again, when utilizing a search by meta-data system the actual image itself is rarely examined.

A great example of a Search by Meta-Data image search engine is Flickr. After uploading an image to Flickr you are presented with a text field to enter tags describing the contents of images you have uploaded. Flickr then takes these keywords, indexes them, and utilizes them to find and recommend other relevant images.

Search by Example

Figure 2: TinEye is an example of a “search by example” image search engine. The contents of the image itself are used to perform the search rather than text.

Search by example systems, on the other hand, rely solely on the contents of the image — no keywords are assumed to be provided. The image is analyzed, quantified, and stored so that similar images are returned by the system during a search.

Image search engines that quantify the contents of an image are called Content-Based Image Retrieval (CBIR) systems. The term CBIR is commonly used in the academic literature, but in reality, it’s simply a fancier way of saying “image search engine”, with the added poignancy that the search engine is relying strictly on the contents of the image and not any textual annotations associated with the image.

A great example of a Search by Example system is TinEye. TinEye is actually a reverse image search engine where you provide a query image, and then TinEye returns near-identical matches of the same image, along with the webpage that the original image appeared on.

Take a look at the example image at the top of this section. Here I have uploaded an image of the Google logo. TinEye has examined the contents of the image and returned to me the 13,000+ webpages that the Google logo appears on after searching through an index of over 6 billion images.

So consider this: Are you going to manually label each of these 6 billion images in TinEye? Of course not. That would take an army of employees and would be extremely costly.

Instead, you utilize some sort of algorithm to extract “features” (i.e., a list of numbers to quantify and abstractly represent the image) from the image itself. Then, when a user submits a query image, you extract features from the query image and compare them to your database of features and try to find similar images.
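To make the idea of "features" concrete, here is a deliberately tiny sketch (not the descriptor we build later in this post): a toy descriptor that summarizes an image by the mean and standard deviation of each color channel, producing a 6-number feature vector. The function name `toy_features` and the synthetic arrays are purely illustrative.

```python
import numpy as np

def toy_features(image):
    # a toy "image descriptor": the mean and standard deviation of
    # each color channel, yielding a 6-number feature vector
    means = image.mean(axis=(0, 1))
    stds = image.std(axis=(0, 1))
    return np.concatenate([means, stds])

# two tiny synthetic "images" (height x width x 3 channels)
mostly_blue = np.zeros((4, 4, 3))
mostly_blue[:, :, 2] = 200.0
mostly_red = np.zeros((4, 4, 3))
mostly_red[:, :, 0] = 200.0

# the two images produce very different feature vectors, which is
# exactly what lets a search engine tell them apart
print(toy_features(mostly_blue))  # channel means [0, 0, 200], stds all 0
print(toy_features(mostly_red))   # channel means [200, 0, 0], stds all 0
```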

Again, it’s important to reinforce the point that Search by Example systems rely strictly on the contents of the image. These types of systems tend to be extremely hard to build and scale, but allow for a fully automated algorithm to govern the search — no human intervention is required.

Hybrid Approach

Figure 3: A hybrid image search engine can take into account both text and images.

Of course, there is a middle ground between the two – consider Twitter, for instance.

On Twitter you can upload photos to accompany your tweets. A hybrid approach would be to correlate the features extracted from the image with the text of the tweet. Using this approach you could build an image search engine that could take both contextual hints along with a Search by Example strategy.

Note: Interested in reading more about the different types of image search engines? I have an entire blog post dedicated to comparing and contrasting them, available here.

Let’s move on to defining some important terms that we’ll use regularly when describing and building image search engines.

Before we get too in-depth, let’s take a little bit of time to define a few important terms.

When building an image search engine we will first have to index our dataset. Indexing a dataset is the process of quantifying our dataset by utilizing an image descriptor to extract features from each image.

An image descriptor defines the algorithm that we are utilizing to describe our image.

For example:

  • The mean and standard deviation of each Red, Green, and Blue channel, respectively,
  • The statistical moments of the image to characterize shape.
  • The gradient magnitude and orientation to describe both shape and texture.

The important takeaway here is that the image descriptor governs how the image is quantified.

Features, on the other hand, are the output of an image descriptor. When you put an image into an image descriptor, you will get features out the other end.

In the most basic terms, features (or feature vectors) are just a list of numbers used to abstractly represent and quantify images.

Take a look at the example figure below:

Figure 4: The pipeline of an image descriptor. An input image is presented to the descriptor, the image descriptor is applied, and a feature vector (i.e., a list of numbers) is returned, used to quantify the contents of the image.

Here we are presented with an input image, we apply our image descriptor, and then our output is a list of features used to quantify the image.

Feature vectors can then be compared for similarity by using a distance metric or similarity function. Distance metrics and similarity functions take two feature vectors as inputs and then output a number that represents how “similar” the two feature vectors are.

The figure below visualizes the process of comparing two images:

Figure 5: To compare two images, we input the respective feature vectors into a distance metric/similarity function. The output is a value used to represent and quantify how “similar” the two images are to each other.

Given two feature vectors, a distance function is used to determine how similar the two feature vectors are. The output of the distance function is a single floating point value used to represent the similarity between the two images.
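As a quick illustration of a distance function, here is the chi-squared distance, one popular choice for comparing histograms (the post lists it among the options in Step 3 below). This is just a sketch; the small `eps` term is a common trick to avoid division by zero for empty bins.

```python
import numpy as np

def chi2_distance(histA, histB, eps=1e-10):
    # chi-squared distance between two histograms:
    # smaller values mean "more similar"
    return 0.5 * np.sum(((histA - histB) ** 2) / (histA + histB + eps))

a = np.array([0.25, 0.25, 0.25, 0.25])
b = np.array([0.25, 0.25, 0.25, 0.25])
c = np.array([1.0, 0.0, 0.0, 0.0])

print(chi2_distance(a, b))  # identical histograms -> 0.0
print(chi2_distance(a, c))  # very different histograms -> larger value (0.6)
```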

No matter what Content-Based Image Retrieval System you are building, they all can be boiled down into 4 distinct steps:

  1. Defining your image descriptor: At this phase you need to decide what aspect of the image you want to describe. Are you interested in the color of the image? The shape of an object in the image? Or do you want to characterize texture?
  2. Indexing your dataset: Now that you have your image descriptor defined, your job is to apply this image descriptor to each image in your dataset, extract features from these images, and write the features to storage (ex. CSV file, RDBMS, Redis, etc.) so that they can be later compared for similarity.
  3. Defining your similarity metric: Cool, now you have a bunch of feature vectors. But how are you going to compare them? Popular choices include the Euclidean distance, Cosine distance, and chi-squared distance, but the actual choice is highly dependent on (1) your dataset and (2) the types of features you extracted.
  4. Searching: The final step is to perform an actual search. A user will submit a query image to your system (from an upload form or via a mobile app, for instance) and your job will be to (1) extract features from this query image and then (2) apply your similarity function to compare the query features to the features already indexed. From there, you simply return the most relevant results according to your similarity function.

Again, these are the most basic 4 steps of any CBIR system. As they become more complex and utilize different feature representations, the number of steps grows and you’ll add a substantial number of sub-steps to each step mentioned above. But for the time being, let’s keep things simple and utilize just these 4 steps.
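The 4 steps above can be sketched in a few lines of Python. Everything here is a stand-in: `describe` is a trivial placeholder descriptor (per-channel sums, not the color histogram we build later), and the function names are illustrative only, but the shape of the pipeline is the same.

```python
def describe(image):
    # Step 1: the image descriptor (here: a trivial per-channel sum)
    return [sum(channel) for channel in image]

def index_dataset(dataset):
    # Step 2: apply the descriptor to every image and store the features
    return {image_id: describe(image) for image_id, image in dataset.items()}

def distance(a, b):
    # Step 3: the similarity metric (here: squared Euclidean distance)
    return sum((x - y) ** 2 for x, y in zip(a, b))

def search(index, query_image, limit=3):
    # Step 4: describe the query, compare it to every indexed feature
    # vector, and return the most relevant image IDs first
    query = describe(query_image)
    results = sorted(index.items(), key=lambda item: distance(item[1], query))
    return [image_id for image_id, _ in results[:limit]]

# toy "images": each is a list of 3 channels, each channel a list of values
dataset = {"beach.png": [[9, 9], [8, 8], [1, 1]],
           "forest.png": [[1, 1], [9, 9], [1, 1]]}
index = index_dataset(dataset)
print(search(index, [[9, 8], [8, 8], [1, 2]]))  # beach.png ranks first
```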

Let’s take a look at a few graphics to make these high-level steps a little more concrete. The figure below details Steps 1 and 2:

Figure 6: A flowchart representing the process of extracting features from each image in the dataset.

We start by taking our dataset of images, extracting features from each image, and then storing these features in a database.

We can then move on to performing a search (Steps 3 and 4):

Figure 7: Performing a search on a CBIR system. A user submits a query, the query image is described, the query features are compared to existing features in the database, results are sorted by relevancy and then presented to the user.

First, a user must submit a query image to our image search engine. We then take the query image and extract features from it. These “query features” are then compared to the features of the images we already indexed in our dataset. Finally, the results are then sorted by relevancy and presented to the user.

We’ll be utilizing the INRIA Holidays Dataset for our dataset of images.

This dataset consists of various vacation trips from all over the world, including photos of the Egyptian pyramids, underwater diving with sea-life, forests in the mountains, wine bottles and plates of food at dinner, boating excursions, and sunsets across the ocean.

Here are a few samples from the dataset:

Figure 8: Example images from the INRIA Holidays Dataset. We’ll be using this dataset to build our image search engine.

In general, this dataset does an extremely good job at modeling what we would expect a tourist to photograph on a scenic trip.

Our goal here is to build a personal image search engine. Given our dataset of vacation photos, we want to make this dataset “search-able” by creating a “more like this” functionality — this will be a “search by example” image search engine. For instance, if I submit a photo of sail boats gliding across a river, our image search engine should be able to find and retrieve our vacation photos of when we toured the marina and docks.

Take a look at the example below where I have submitted a photo of the boats on the water and have found relevant images in our vacation photo collection:

Figure 9: An example of our image search engine. We submit a query image containing boats on the sea. The results returned to us are relevant since they too contain both boats and the sea.

In order to build this system, we’ll be using a simple, yet effective image descriptor: the color histogram.

By utilizing a color histogram as our image descriptor, we’ll be relying on the color distribution of the image. Because of this, we have to make an important assumption regarding our image search engine:

Assumption: Images that have similar color distributions will be considered relevant to each other. Even if images have dramatically different contents, they will still be considered “similar” provided that their color distributions are similar as well.

This is a really important assumption, but is normally a fair and reasonable assumption to make when using color histograms as image descriptors.

Instead of using a standard color histogram, we are going to apply a few tricks and make it a little more robust and powerful.

Our image descriptor will be a 3D color histogram in the HSV color space (Hue, Saturation, Value). Typically, images are represented as a 3-tuple of Red, Green, and Blue (RGB). We often think of the RGB color space as “cube”, as shown below:

Figure 10: Example of the RGB cube.

However, while RGB values are simple to understand, the RGB color space fails to mimic how humans perceive color. Instead, we are going to use the HSV color space which maps pixel intensities into a cylinder:

Figure 11: Example of the HSV cylinder.

There are other color spaces that do an even better job at mimicking how humans perceive color, such as the CIE L*a*b* and CIE XYZ spaces, but let’s keep our color model relatively simple for our first image search engine implementation.

So now that we have selected a color space, we now need to define the number of bins for our histogram. Histograms are used to give a (rough) sense of the density of pixel intensities in an image. Essentially, our histogram will estimate the probability density of the underlying function, or in this case, the probability P of a pixel color C occurring in our image I.

It’s important to note that there is a trade-off with the number of bins you select for your histogram. If you select too few bins, then your histogram will have fewer components and be unable to disambiguate between images with substantially different color distributions. Likewise, if you use too many bins, your histogram will have many components, and images with very similar contents may be regarded as “not similar” when in reality they are.

Here’s an example of a histogram with only a few bins:

Figure 12: An example of a 9-bin histogram. Notice how there are very few bins for a given pixel to be placed in.

Notice how there are very few bins that a pixel can be placed into.

And here’s an example of a histogram with lots of bins:

Figure 13: An example of a 128-bin histogram. Notice how there are many bins that a given pixel can be placed in.

In the above example you can see that many bins are utilized, but with the larger number of bins, you lose your ability to “generalize” between images with similar perceptual content since all of the peaks and valleys of the histogram will have to match in order for two images to be considered “similar”.

Personally, I like an iterative, experimental approach to tuning the number of bins. This iterative approach is normally based on the size of my dataset. The smaller my dataset is, the fewer bins I use. And if my dataset is large, I use more bins, making my histograms larger and more discriminative.

In general, you’ll want to experiment with the number of bins for your color histogram descriptor as it is dependent on (1) the size of your dataset and (2) how similar the color distributions in your dataset are to each other.

For our vacation photo image search engine, we’ll be utilizing a 3D color histogram in the HSV color space with 8 bins for the Hue channel, 12 bins for the saturation channel, and 3 bins for the value channel, yielding a total feature vector of dimension 8 x 12 x 3 = 288.

This means that for every image in our dataset, no matter if the image is 36 x 36 pixels or 2000 x 1800 pixels, all images will be abstractly represented and quantified using only a list of 288 floating point numbers.

I think the best way to explain a 3D histogram is to use the conjunctive AND. A 3D HSV color descriptor will ask a given image how many pixels have a Hue value that fall into bin #1 AND how many pixels have a Saturation value that fall into bin #1 AND how many pixels have a Value intensity that fall into bin #1. The number of pixels that meet these requirements are then tabulated. This process is repeated for each combination of bins; however, we are able to do it in an extremely computationally efficient manner.
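This conjunctive-AND binning can be sketched with NumPy alone (the actual descriptor below uses cv2.calcHist). The random "pixels" here are synthetic stand-ins for HSV values; the point is that 8 x 12 x 3 binning yields exactly 288 counts and that every pixel lands in exactly one (Hue bin AND Saturation bin AND Value bin) combination.

```python
import numpy as np

# 1,000 fake HSV pixels: Hue in [0, 180), Saturation and Value in [0, 256)
rng = np.random.default_rng(42)
pixels = rng.uniform([0, 0, 0], [180, 256, 256], size=(1000, 3))

# bin each pixel jointly across all three channels at once
hist, _ = np.histogramdd(pixels, bins=(8, 12, 3),
                         range=((0, 180), (0, 256), (0, 256)))

print(hist.shape)           # (8, 12, 3)
print(hist.flatten().size)  # 288 -> the dimensionality of the feature vector
print(hist.sum())           # 1000.0 -> every pixel falls in exactly one bin
```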

Pretty cool, right?

Anyway, enough talk. Let’s get into some code.

Open up a new file in your favorite code editor, name it colordescriptor.py  and let’s get started:

# import the necessary packages
import numpy as np
import cv2
import imutils

class ColorDescriptor:
    def __init__(self, bins):
        # store the number of bins for the 3D histogram
        self.bins = bins

    def describe(self, image):
        # convert the image to the HSV color space and initialize
        # the features used to quantify the image
        image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
        features = []

        # grab the dimensions and compute the center of the image
        (h, w) = image.shape[:2]
        (cX, cY) = (int(w * 0.5), int(h * 0.5))

We’ll start by importing the Python packages we’ll need. We’ll use NumPy for numerical processing, cv2  for our OpenCV bindings, and imutils  to check our OpenCV version.

We then define our ColorDescriptor  class on Line 6. This class will encapsulate all the necessary logic to extract our 3D HSV color histogram from our images.

The __init__  method of the ColorDescriptor  takes only a single argument, bins , which is the number of bins for our color histogram.

We can then define our describe  method on Line 11. This method requires an image , which is the image we want to describe.

Inside of our describe  method we’ll convert from the RGB color space (or rather, the BGR color space; OpenCV represents RGB images as NumPy arrays, but in reverse order) to the HSV color space, followed by initializing our list of features  to quantify and represent our image .

Lines 18 and 19 simply grab the dimensions of the image and compute the center (x, y) coordinates.

So now the hard work starts.

Instead of computing a 3D HSV color histogram for the entire image, let’s instead compute a 3D HSV color histogram for different regions of the image.

Using regions-based histograms rather than global-histograms allows us to simulate locality in a color distribution. For example, take a look at this image below:

Figure 14: Our example query image.Figure 14: Our example query image.

In this photo we can clearly see a blue sky at the top of the image and a sandy beach at the bottom. Using a global histogram we would be unable to determine where in the image the “blue” occurs and where the “brown” sand occurs. Instead, we would just know that there exists some percentage of blue and some percentage of brown.

To remedy this problem, we can compute color histograms in regions of the image:

Figure 15: Example of dividing our image into 5 different segments.Figure 15: Example of dividing our image into 5 different segments.

For our image descriptor, we are going to divide our image into five different regions: (1) the top-left corner, (2) the top-right corner, (3) the bottom-right corner, (4) the bottom-left corner, and finally (5) the center of the image.

By utilizing these regions we’ll be able to mimic a crude form of localization, being able to represent our above beach image as having shades of blue sky in the top-left and top-right corners, brown sand in the bottom-left and bottom-right corners, and then a combination of blue sky and brown sand in the center region.

That all said, here is the code to create our region-based color descriptor:

        # divide the image into four rectangles/segments (top-left,
        # top-right, bottom-right, bottom-left)
        segments = [(0, cX, 0, cY), (cX, w, 0, cY), (cX, w, cY, h),
            (0, cX, cY, h)]

        # construct an elliptical mask representing the center of the
        # image
        (axesX, axesY) = (int(w * 0.75) // 2, int(h * 0.75) // 2)
        ellipMask = np.zeros(image.shape[:2], dtype = "uint8")
        cv2.ellipse(ellipMask, (cX, cY), (axesX, axesY), 0, 0, 360, 255, -1)

        # loop over the segments
        for (startX, endX, startY, endY) in segments:
            # construct a mask for each corner of the image, subtracting
            # the elliptical center from it
            cornerMask = np.zeros(image.shape[:2], dtype = "uint8")
            cv2.rectangle(cornerMask, (startX, startY), (endX, endY), 255, -1)
            cornerMask = cv2.subtract(cornerMask, ellipMask)

            # extract a color histogram from the image, then update the
            # feature vector
            hist = self.histogram(image, cornerMask)
            features.extend(hist)

        # extract a color histogram from the elliptical region and
        # update the feature vector
        hist = self.histogram(image, ellipMask)
        features.extend(hist)

        # return the feature vector
        return features

Lines 23 and 24 start by defining the indexes of our top-left, top-right, bottom-right, and bottom-left regions, respectively.

From there, we’ll need to construct an ellipse to represent the center region of the image. We’ll do this by defining an ellipse radius that is 75% of the width and height of the image on Line 28.

We then initialize a blank image (filled with zeros to represent a black background) with the same dimensions of the image we want to describe on Line 29.

Finally, let’s draw the actual ellipse on Line 30 using the cv2.ellipse  function. This function requires eight different parameters:

  1. ellipMask : The image we want to draw the ellipse on. We’ll be using a concept of “masks” which I’ll discuss shortly.
  2. (cX, cY) : A 2-tuple representing the center (x, y)-coordinates of the image.
  3. (axesX, axesY) : A 2-tuple representing the length of the axes of the ellipse. In this case, the ellipse will stretch to be 75% of the width and height of the image  that we are describing.
  4.  0 : The rotation of the ellipse. In this case, no rotation is required so we supply a value of 0 degrees.
  5.  0 : The starting angle of the ellipse.
  6. 360 : The ending angle of the ellipse. Looking at the previous parameter, this indicates that we’ll be drawing an ellipse from 0 to 360 degrees (a full “circle”).
  7. 255 : The color of the ellipse. The value of 255 indicates “white”, meaning that our ellipse will be drawn white on a black background.
  8. -1 : The border size of the ellipse. Supplying a positive integer r will draw a border of size r pixels. Supplying a negative value for r will make the ellipse “filled in”.
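To build intuition for what cv2.ellipse produces, here is a NumPy-only sketch of the same mask (the real code above uses cv2.ellipse, of course): a pixel is white (255) exactly when it satisfies the ellipse inequality (x/a)² + (y/b)² ≤ 1 around the center. The dimensions here are arbitrary example values.

```python
import numpy as np

# illustrative image dimensions and the same 75%-sized axes as the post
h, w = 100, 200
cX, cY = w // 2, h // 2
axesX, axesY = int(w * 0.75) // 2, int(h * 0.75) // 2

# evaluate the ellipse inequality at every pixel coordinate
ys, xs = np.mgrid[0:h, 0:w]
inside = ((xs - cX) / axesX) ** 2 + ((ys - cY) / axesY) ** 2 <= 1.0
ellipMask = np.where(inside, 255, 0).astype("uint8")

print(ellipMask[cY, cX])  # 255 -> the center pixel is inside the ellipse
print(ellipMask[0, 0])    # 0   -> the corner pixel is outside
```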

We then allocate memory for each corner mask on Line 36, draw a white rectangle representing the corner of the image on Line 37, and then subtract the center ellipse from the rectangle on Line 38.

If we were to animate this process of looping over the corner segments it would look something like this:

Figure 16: Constructing masks for each region of the image we want to extract features from.

As this animation shows, we are examining each of the corner segments individually, subtracting the center ellipse from the rectangle at each iteration.

So you may be wondering, “Aren’t we supposed to be extracting color histograms from our image? Why are we doing all this ‘masking’ business?”

Great question.

The reason is because we need the mask to instruct the OpenCV histogram function where to extract the color histogram from.

Remember, our goal is to describe each of these segments individually. The most efficient way of representing each of these segments is to use a mask. Only (x, y)-coordinates in the image that have a corresponding (x, y) location in the mask with a white (255) pixel value will be included in the histogram calculation. If the pixel value for an (x, y)-coordinate in the mask has a value of black (0), it will be ignored.

To reiterate this concept of only including pixels in the histogram with a corresponding mask value of white, take a look at the following animation:

Figure 17: Applying the masked regions to the image. Notice how only the pixels in the left image are shown if they have a corresponding white mask value in the image on the right.

As you can see, only pixels in the masked region of the image will be included in the histogram calculation.

Makes sense now, right?

So now for each of our segments we make a call to the histogram  method on Line 42, extract the color histogram by using the image  we want to extract features from as the first argument and the mask  representing the region we want to describe as the second argument.

The histogram  method then returns a color histogram representing the current region, which we append to our features  list.

Lines 47 and 48 extract a color histogram for the center (ellipse) region and update the features  list as well.

Finally, Line 51 returns our feature vector to the calling function.

Now, let’s quickly look at our actual histogram  method:

    def histogram(self, image, mask):
        # extract a 3D color histogram from the masked region of the
        # image, using the supplied number of bins per channel
        hist = cv2.calcHist([image], [0, 1, 2], mask, self.bins,
            [0, 180, 0, 256, 0, 256])

        # normalize the histogram if we are using OpenCV 2.4
        if imutils.is_cv2():
            hist = cv2.normalize(hist).flatten()

        # otherwise, handle for OpenCV 3+
        else:
            hist = cv2.normalize(hist, hist).flatten()

        # return the histogram
        return hist

Our histogram  method requires two arguments: the first is the image  that we want to describe and the second is the mask  that represents the region of the image we want to describe.

Calculating the histogram of the masked region of the image is handled on Lines 56 and 57 by making a call to cv2.calcHist  using the supplied number of bins  from our constructor.

Our color histogram is normalized on Line 61 or 65 (depending on OpenCV version) to obtain scale invariance. This means that if we computed a color histogram for two identical images, except that one was 50% larger than the other, our color histograms would be (nearly) identical. It is very important that you normalize your color histograms so each histogram is represented by the relative percentage counts for a particular bin and not the integer counts for each bin. Again, performing this normalization will ensure that images with similar content but dramatically different dimensions will still be “similar” once we apply our similarity function.
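Why normalization buys us scale invariance is easy to see with toy numbers: the same scene at 2x the resolution has roughly 4x the raw bin counts, but identical relative proportions once normalized. (This sketch uses a simple divide-by-total normalization for clarity; the exact norm cv2.normalize applies depends on its arguments.)

```python
import numpy as np

small = np.array([10.0, 30.0, 60.0])  # raw bin counts for a 100-pixel image
large = small * 4.0                   # the same scene with 4x the pixels

# normalize so each bin holds a relative proportion, not a raw count
small_norm = small / small.sum()
large_norm = large / large.sum()

print(np.allclose(small_norm, large_norm))  # True
print(small_norm)  # [0.1 0.3 0.6] -> identical proportions at either scale
```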

Finally, the normalized, 3D HSV color histogram is returned to the calling function on Line 68.

Now that we have our image descriptor defined, we can move on to Step 2, and extract  features (i.e. color histograms) from each image in our dataset. The process of extracting features and storing them on persistent storage is commonly called “indexing”.

Let’s go ahead and dive into some code to index our vacation photo dataset. Open up a new file, name it index.py  and let’s get indexing:

# import the necessary packages
from pyimagesearch.colordescriptor import ColorDescriptor
import argparse
import glob
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required = True,
    help = "Path to the directory that contains the images to be indexed")
ap.add_argument("-i", "--index", required = True,
    help = "Path to where the computed index will be stored")
args = vars(ap.parse_args())

# initialize the color descriptor
cd = ColorDescriptor((8, 12, 3))

We’ll start by importing the packages we’ll need. You’ll remember the ColorDescriptor  class from Step 1 — I decided to place it in the pyimagesearch  module for organizational purposes.

We’ll also need argparse  for parsing command line arguments, glob  for grabbing the file paths to our images, and cv2  for OpenCV bindings.

Parsing our command line arguments is handled on Lines 8-13. We’ll need two switches, --dataset , which is the path to our vacation photos directory, and --index , which is the output CSV file containing the image filename and the features associated with each image.

Finally, we initialize our ColorDescriptor  on Line 16 using 8 Hue bins, 12 Saturation bins, and 3 Value bins.

Now that everything is initialized, we can extract features from our dataset:

# open the output index file for writing
output = open(args["index"], "w")

# use glob to grab the image paths and loop over them
for imagePath in glob.glob(args["dataset"] + "/*.png"):
    # extract the image ID (i.e. the unique filename) from the image
    # path and load the image itself
    imageID = imagePath[imagePath.rfind("/") + 1:]
    image = cv2.imread(imagePath)

    # describe the image
    features = cd.describe(image)

    # write the features to file
    features = [str(f) for f in features]
    output.write("%s,%s\n" % (imageID, ",".join(features)))

# close the index file
output.close()

Let’s open our output file for writing on Line 19, then loop over all the images in our dataset on Line 22.

For each of the images we’ll extract an imageID , which is simply the filename of the image. For this example search engine, we’ll assume that all filenames are unique, but we could just as easily generate a UUID for each image. We’ll then load the image off disk on Line 26.
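The ID extraction and the UUID alternative can be sketched in isolation (the path here is hypothetical, chosen for illustration):

```python
import os
import uuid

# Hypothetical image path; the tutorial derives the ID from the filename.
imagePath = "dataset/beach_01.png"
imageID = imagePath[imagePath.rfind("/") + 1:]

# Equivalent, and portable across path separators:
assert imageID == os.path.basename(imagePath)

# If filenames were not guaranteed unique, a random UUID would work instead:
altID = str(uuid.uuid4())

print(imageID)  # beach_01.png
```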

Now that the image is loaded, let’s go ahead and apply our image descriptor and extract features from the image on Line 29. The describe  method of our ColorDescriptor  returns a list of floating point values used to represent and quantify our image.

This list of numbers, or feature vector, contains representations for each of the 5 image regions we described in Step 1. Each region is represented by a histogram with 8 x 12 x 3 = 288 entries. Given 5 regions, our overall feature vector is 5 x 288 = 1,440 dimensional. Thus each image is quantified and represented using 1,440 numbers.
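The dimensionality arithmetic works out as follows:

```python
# Each region's HSV histogram uses 8 Hue, 12 Saturation, 3 Value bins;
# the descriptor computes one such histogram per image region.
bins = (8, 12, 3)
regions = 5

per_region = bins[0] * bins[1] * bins[2]  # entries per region histogram
total = regions * per_region              # overall feature vector length

print(per_region, total)  # 288 1440
```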

Lines 32 and 33 simply write the filename of the image and its associated feature vector to file.

To index our vacation photo dataset, open up a shell and issue the following command:

$ python index.py --dataset dataset --index index.csv

This script shouldn’t take longer than a few seconds to run. After it is finished you will have a new file, index.csv .

Open this file using your favorite text editor and take a look inside.

You’ll see that for each row in the .csv file, the first entry is the filename, followed by a list of numbers. These numbers are your feature vectors and are used to represent and quantify the image.
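A row in this format can be split back into an image ID and a float feature vector like so (the filename and values below are made up for illustration; a real row has 1,440 features):

```python
import csv
import io

# A toy row in the index.csv format: filename first, then the features.
row_text = "beach_01.png,0.21,0.04,0.75\n"

reader = csv.reader(io.StringIO(row_text))
row = next(reader)

# Recover the image ID and parse the remaining entries as floats.
imageID = row[0]
features = [float(x) for x in row[1:]]

print(imageID, features)  # beach_01.png [0.21, 0.04, 0.75]
```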

Running a wc  on the index, we can see that we have successfully indexed our dataset of 805 images:

$ wc -l index.csv
805 index.csv

Now that we’ve extracted features from our dataset, we need a method to compare these features for similarity. That’s where Step 3 comes in — we are now ready to create a class that will define the actual similarity metric between two images.

Create a new file, name it searcher.py  and let’s make some magic happen:

# import the necessary packages
import numpy as np
import csv

class Searcher:
    def __init__(self, indexPath):
        # store our index path
        self.indexPath = indexPath

    def search(self, queryFeatures, limit = 10):
        # initialize our dictionary of results
        results = {}

We’ll go ahead and import NumPy for numerical processing and csv  for convenience to make parsing our index.csv  file easier.

From there let’s define our Searcher  class on Line 5. The constructor for our Searcher  will only require a single argument, indexPath  which is the path to where our index.csv  file resides on disk.

To actually perform a search, we’ll be making a call to the search  method on Line 10. This method will take two parameters, the queryFeatures  extracted from the query image (i.e. the image we’ll be submitting to our CBIR system and asking for similar images to), and limit  which is the maximum number of results to return.

Finally, we initialize our results  dictionary on Line 12. A dictionary is a good data-type in this situation as it will allow us to use the (unique) imageID  for a given image as the key and the similarity to the query as the value.

Okay, so pay attention here. This is where the magic happens:

        # open the index file for reading
        with open(self.indexPath) as f:
            # initialize the CSV reader
            reader = csv.reader(f)

            # loop over the rows in the index
            for row in reader:
                # parse out the image ID and features, then compute the
                # chi-squared distance between the features in our index
                # and our query features
                features = [float(x) for x in row[1:]]
                d = self.chi2_distance(features, queryFeatures)

                # now that we have the distance between the two feature
                # vectors, we can update the results dictionary -- the
                # key is the current image ID in the index and the
                # value is the distance we just computed, representing
                # how 'similar' the image in the index is to our query
                results[row[0]] = d

        # sort our results, so that the smaller distances (i.e. the
        # more relevant images) are at the front of the list
        results = sorted([(v, k) for (k, v) in results.items()])

        # return our (limited) results
        return results[:limit]

We open up our index.csv  file on Line 15, grab a handle to our CSV reader on Line 17, and then start looping over each row of the index.csv  file on Line 20.

For each row, we extract the color histograms associated with the indexed image and then compare it to the query image features using the chi2_distance  (Line 25), which I’ll define in a second.

Our results  dictionary is updated on Line 32 using the unique image filename as the key and the similarity of the query image to the indexed image as the value.

Lastly, all we have to do is sort the results dictionary according to the similarity value in ascending order.

Images that have a chi-squared distance of 0 are deemed identical to each other. As the chi-squared distance increases, the images are considered less and less similar.
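The sort-and-rank step can be sketched on its own with made-up distances (the filenames and values below are hypothetical):

```python
# Hypothetical distances keyed by image ID; smaller means more similar.
results = {"beach_01.png": 0.83, "pyramid_02.png": 0.02,
           "sunset_03.png": 0.41}

# Swapping each pair to (distance, imageID) lets sorted() rank by
# distance in ascending order, putting the best match first.
ranked = sorted([(v, k) for (k, v) in results.items()])

print(ranked[0])  # (0.02, 'pyramid_02.png')
```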

Speaking of chi-squared similarity, let’s go ahead and define that function:

    def chi2_distance(self, histA, histB, eps = 1e-10):
        # compute the chi-squared distance
        d = 0.5 * np.sum([((a - b) ** 2) / (a + b + eps)
            for (a, b) in zip(histA, histB)])

        # return the chi-squared distance
        return d

Our chi2_distance  function requires two arguments, which are the two histograms we want to compare for similarity. An optional eps  value is used to prevent division-by-zero errors.

The function gets its name from the Pearson’s chi-squared test statistic which is used to compare discrete probability distributions.

Since we are comparing color histograms, which are by definition probability distributions, the chi-squared function is an excellent choice.

In general, a given difference between two large bins matters less than the same difference between two small bins, and it should be weighted as such. This is exactly what the chi-squared distance function does by dividing each squared difference by the sum of the corresponding bins.
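For a standalone feel of its behavior, here is the same chi-squared distance outside the Searcher class, applied to made-up 3-bin histograms:

```python
import numpy as np

# Standalone version of the chi-squared distance used above.
def chi2_distance(histA, histB, eps=1e-10):
    return 0.5 * np.sum([((a - b) ** 2) / (a + b + eps)
                         for (a, b) in zip(histA, histB)])

a = [0.25, 0.25, 0.50]  # made-up normalized histograms
b = [0.25, 0.25, 0.50]
c = [0.70, 0.10, 0.20]

print(chi2_distance(a, b))      # 0.0 -- identical histograms
print(chi2_distance(a, c) > 0)  # True -- differing histograms
```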

Are you still with us? We’re getting there, I promise. The last step is actually the easiest and is simply a driver that glues all the pieces together.

Would you believe it if I told you that performing the actual search is the easiest part? In reality, it’s just a driver that imports all of the packages that we have defined earlier and uses them in conjunction with each other to build a full-fledged Content-Based Image Retrieval System.

So open up one last file, name it search.py , and we’ll bring this example home:

# import the necessary packages
from pyimagesearch.colordescriptor import ColorDescriptor
from pyimagesearch.searcher import Searcher
import argparse
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--index", required = True,
    help = "Path to where the computed index will be stored")
ap.add_argument("-q", "--query", required = True,
    help = "Path to the query image")
ap.add_argument("-r", "--result-path", required = True,
    help = "Path to the result path")
args = vars(ap.parse_args())

# initialize the image descriptor
cd = ColorDescriptor((8, 12, 3))

The first thing we’ll do is import our necessary packages. We’ll import our ColorDescriptor  from Step 1 so that we can extract features from the query image. And we’ll also import our Searcher  that we defined in Step 3 so that we can perform the actual search.

The argparse  and cv2  packages round out our imports.

We then parse command line arguments on Lines 8-15. We'll need an --index, which is the path to where our index.csv file resides.

We'll also need a --query, which is the path to our query image. This image will be compared to each image in our index. The goal will be to find images in the index that are similar to our query image.

Think of it this way — when you go to Google and type in the term “Python OpenCV tutorials”, you would expect to find search results that contain information relevant to learning Python and OpenCV.

Similarly, if we are building an image search engine for our vacation photos and we submit an image of a sailboat on a blue ocean and white puffy clouds, we would expect to get similar ocean view images back from our image search engine.

We'll then ask for a --result-path, which is the path to our vacation photos dataset. We require this switch because we'll need to display the actual result images to the user.

Finally, we initialize our image descriptor on Line 18 using the exact same parameters as we did in the indexing step. If our intention is to compare images for similarity (which it is), it wouldn’t make sense to change the number of bins in our color histograms from indexing to search.

Simply put: use the exact same number of bins for your color histogram during Step 4 as you did in Step 3.

This will ensure that your images are described in a consistent manner and are thus comparable.

Okay, time to perform the actual search:

# load the query image and describe it
query = cv2.imread(args["query"])
features = cd.describe(query)

# perform the search
searcher = Searcher(args["index"])
results = searcher.search(features)

# display the query
cv2.imshow("Query", query)

# loop over the results
for (score, resultID) in results:
    # load the result image and display it
    result = cv2.imread(args["result_path"] + "/" + resultID)
    cv2.imshow("Result", result)
    cv2.waitKey(0)

We load our query image off disk on Line 21 and extract features from it on Line 22.

The search is then performed on Lines 25 and 26 using the features extracted from the query image, returning our list of ranked results .

From here, all we need to do is display the results to the user.

We display the query image to the user on Line 29. And then we loop over our search results on Lines 32-36 and display them to the screen.

After all this work I’m sure you’re ready to see this system in action, aren’t you?

Well keep reading — this is where all our hard work pays off.

Open up your terminal, navigate to the directory where your code lives, and issue the following command:

$ python search.py --index index.csv --query queries/108100.png --result-path dataset

Figure 18: Search our vacation image dataset for pictures of the pyramids and Egypt.

The first image you’ll see is our query image of the Egyptian pyramids. Our goal is to find similar images in our dataset. As you can see, we have clearly found the other photos of the dataset from when we visited the pyramids.

We also spent some time visiting other areas of Egypt. Let’s try another query image:

$ python search.py --index index.csv --query queries/115100.png --result-path dataset

Figure 19: The results of our image search engine for other areas of Egypt. Notice how the blue sky consistently appears in the search results.

Be sure to pay close attention to our query image. Notice how the sky is a brilliant shade of blue in the upper regions of the image. And notice how we have brown and tan desert and buildings at the bottom and center of the image.

And sure enough, in our results the images returned to us have blue sky in the upper regions and tan/brown desert and structures at the bottom.

The reason for this is because of our region-based color histogram descriptor that we detailed earlier in this post. By utilizing this image descriptor we have been able to perform a crude form of localization, providing our color histogram with hints as to “where” the pixel intensities occurred in the image.

Next up on our vacation we stopped at the beach. Execute the following command to search for beach photos:

$ python search.py --index index.csv --query queries/103300.png --result-path dataset

Figure 20: Using our Content-Based Image Retrieval System built using OpenCV to find images of the beach in our dataset.

Notice how the first three results are from the exact same location on the trip to the beach. And the rest of the result images contain shades of blue.

Of course, no trip to the beach is complete without scuba diving:

$ python search.py --index index.csv --query queries/103100.png --result-path dataset

Figure 21: Once again, our image search engine is able to return relevant results. This time, of an underwater adventure.

The results from this search are particularly impressive. The top 5 results are of the same fish — and all but one of the top 10 results are from the underwater excursion.

Finally, after a long day, it’s time to watch the sunset:

$ python search.py --index index.csv --query queries/127502.png --result-path dataset

Figure 22: Our OpenCV image search engine is able to find the images of the sunset in our vacation photo dataset.

These search results are also quite good — all of the images returned are of the sunset at dusk.

So there you have it! Your first image search engine.

In this blog post we explored how to build an image search engine to make our vacation photos search-able.

We utilized a color histogram to characterize the color distribution of our photos. Then, we indexed our dataset using our color descriptor, extracting color histograms from each of the images in the dataset.

To compare images we utilized the chi-squared distance, a popular choice when comparing discrete probability distributions.

From there, we implemented the necessary logic to accept a query image and then return relevant results.

So what are the next steps?

Well as you can see, the only way to interact with our image search engine is via the command line — that’s not very attractive.

In the next post we’ll explore how to wrap our image search engine in a Python web-framework to make it easier and sexier to use.



Source

Towards a Semantic Search Engine for Open Source Software


Source

The 6 Best Free and Open Source SEO Software Solutions

    Search Engine Optimization (SEO) is vital to businesses today. It is a fundamental process to generate customers. Marketers today rely more on ‘SEO Software Suites’ to stay ahead of their competitors.

    AI-based enterprise SEO technology is now the trend. From Siri to Alexa to Google Home, tech-driven solutions and tools facilitate nearly everything.

    Earlier, SEO was merely about sharing links to get wider reach. But automated software programs have given marketers greater pace and new opportunities to explore new horizons of search engine marketing. Google Search Console has brought a new revolution to the SEO industry.

    This article will allow you to understand more about the SEO Software systems and learn about some of the powerful free and open source SEO software solutions that can bring the best out of all businesses.

    SEO Software – A Quick View!

    SEO involves optimizing a website's content to match the search intent of users. How search engines index your website reflects how useful its content is to users, and the retention rate shows how long they stick to your site. Search Engine Marketing and Social Media Marketing are supporting pillars of basic SEO activity that help in the overall development of a web project and provide the means to measure and analyze its progress.

    Software that is easy to download and install, and that is upgraded regularly, is always appreciated by users, and the reporting features that come with it are a major plus.

    Other than backlink tracking, keyword analysis, and trend analysis, the SEO software program identifies the best possible strategies to improve search relevance of your website.

    Many tools are available online that offer broad industry analysis and competitor data to webmasters. SEO software tools are the backbone of product and marketing teams in organizations. They save the time and effort of SEO experts by identifying the key aspects that could help generate higher rankings on search engines.

    Some software solutions also offer paid search or search advertising facility and optimization tools. Search advertising tools are used to analyze conversion metrics, pay-per-click (PPC) advertising and ad placement. However, some of the free SEO software programs also yield the desired results.

    Important Constituents of an SEO Software Program

    Any SEO software solution should cover the following features:

    1. Universal Search Analysis
    2. Keyword Rank Tracking
    3. Keyword Opportunity Research
    4. Content Optimization Recommendations
    5. Site Crawling Functionality
    6. Page Reporting and Recommendations
    7. Backlink Analysis
    8. Analytics Integration
    9. Technical SEO Crawling and Recommendations
    10. Social Media Metric Tracking

    An Efficient and Effective SEO Software

    SEO software is rated based on its efficiency. At a minimum, it should be able to perform the following functions:

    • Analyze the content of a website by providing data & suggestions for the improvements within SERPs.
    • Produce historical reports based on the optimization-related metrics.
    • Focus on improving the free and organic search engine listings.

    Key Benefits of using the SEO Software

    Google, Bing, and AOL are among the leading search engines that rule the search market. Content published on internet-based channels goes through an evaluation process called indexing. Once the search engines approve it, it gets ranked based on factors such as user focus, readability, link quality, image, video and audio quality, authenticity, and current market trends.

    Search engine rankings matter a lot to businesses. The higher a business website ranks, the more trade it can retain: web visitors are more likely to check it out, so its chances of winning business improve.

    Following the basic assumption that whatever ranks first must be the best, customers check the search results displayed in positions 1 through 10 and choose the most suitable business, opportunity, product or service. The research continues in descending order until customers select the preferred web solution according to their needs and budget.

    A good list of SEO software solutions, compiled using the best keywords and search results, is also made available to customers to help them upgrade their web projects and lead the market competition.

    Let us check what the key benefits of leading SEO Software solutions are.

    1. The software optimizes your website to perform best on the SERPs.
    2. It researches best key-phrases for targeting in the search marketing efforts.
    3. It stays updated with all changes in search engine algorithms.
    4. It provides comprehensive reports on website ranking and performance.
    5. It researches the competitor’s keywords and strategies.

    Recent Trends in the SEO Software Market

    As per Statista research, search engine marketing expenditure in the US alone is projected to reach around $80 billion annually by 2020. The study shows that more than 60% of brands manage their SEO campaigns with a third-party software solution. The enterprise SEO technology market has become big business today.

    Local SEO Software Market

    The local SEO software market is also expected to grow at a CAGR of +14.83% during the period 2019-2024. This type of software is generally used to promote a company's products or services to potential local customers. It also allows users to identify, organize and analyze keywords while advancing their search engine rankings. Such programs are accurate, scalable and fast, giving deep insight into keyword density, real-time keyword rankings, and ROI.

    The local SEO market segmentation covers two categories for which the relevant type of software deployment can proceed further:

    Market Segmentation by Type: Cloud-based and On-premises software deployment.

    Market Segmentation by Applications: Small and Medium-sized Enterprises (SMEs) and Large Enterprises.

    Why Do Marketers Prefer Free and Open Source SEO Software?

    Many small to medium-sized businesses prefer open source SEO software programs. Most are free and easy to use, with communities of thousands of programmers constantly contributing enhancements. The full codebase of these open source projects is available to users for editing, offering total flexibility.

    Marketers prefer open source SEO software solutions due to the following reasons:

    1. Open-source platforms keep up with Google's ever-changing requirements and algorithms better and faster than proprietary platforms, and the cost of applying the patches and updates released to address Google's new requirements is minimal or zero.
    2. Open source SEO software programs are more portable than others. They are easy to set up and give acceptable results, and the free version provides enough traction for evaluating the app.
    3. Open source SEO programs are fully editable. No part of their code or database is locked behind a commercial licence, as is the case with proprietary solutions.
    4. The majority of open source software programs come with multi-lingual support, so users need not worry about language restrictions. These programs come with flexible plans, and their settings can be adjusted easily.

    Now, let us check some of the powerful free and open source SEO software programs that will be helpful for you to move further with your SEO campaigns and benefit you for the long term.

    A thorough comparison report is also given below to help you compare which software will work best for your SEO requirements.

    Now, let us check all these software programs in detail.

    1. SEOPanel

    Award-winning open source SEO software, SEO Panel provides an excellent toolkit for managing the search engine optimization of your websites. It is free software available under the GNU General Public License v2. Initially released in January 2010, it was named Most Promising Open Source Project at the 2011 Open Source Awards.

    Highlights:

    • SEOPanel retains the reliable 'old-school' feature of automatic directory submission, which is hardly offered anymore by most SEO companies.
    • The most significant feature of the SEOPanel software is its SEO plug-ins, which can be added to the panel to extend its features as per your requirements.
    • It ships with a keyword position checker, rank checker, sitemap generator, meta tag generator, backlink checker and site auditor tool.
    • It supports 30 languages and includes a report-sending feature.
    (Source: SEOPanel)

    2. Pipulate

    Pipulate is a fantastic, free and open source SEO program designed to let users check and investigate data on URLs, keywords and anything else that is trending. It is licensed under the MIT License and runs on Mac, Windows or Linux desktops.

    Highlights:

    • Pipulate investigates position checks, site crawls, and API-hitting lookup jobs directly in Google Docs.
    • It supports scheduled analyses.
    • In addition to analyzing SEO and social media factors, it also supports Python functions.
    • It offers an awesome UI.
    (Source: Pipulate)

    3. SERPOSCOPE by Serphacker

    One of the leading free and open source SEO tools, SERPOSCOPE excels at rank tracking, monitoring your website's ranking in Google. It runs on Windows, Mac OS X and Linux, is licensed under the MIT License, and helps improve your website's SEO performance.

    Highlights:

    • Serposcope software allows unlimited keywords and website tracking.
    • It facilitates competitor tracking, local and custom search and user account management.
    • It is designed to run on a desktop PC or server flexibly.
    • It provides proxy and captcha support.
    • It only requires Java and offers easy setup with a one-click installer.
    (Source: Serposcope)

    4. Matomo

    Matomo is a premium web analytics platform that fully respects the user’s privacy. It is a free and open source web analytics tool that comes with 100% data ownership. It is a reliable, secure and customizable tool that comes with GDPR compliance.

    Highlights:

    • Matomo claims 100% data accuracy.
    • It evaluates and performs enhanced SEO features.
    • It analyzes Heatmaps, Sessions Recordings, A/B Testing, Funnels, Goals, and Form Analytics.
    • It is currently used on over 1.4 million websites across 190 countries.
    • It respects user-privacy and provides roll-up reporting.
    (Source: Matomo)

    5. OpenSEO

    OpenSEO is a free and open source tool for evaluating your website. Users call it a must-have Chrome extension for staying aware of the health of their web properties, and it supports over 30 languages.

    Highlights:

    • It shows web rank and SEO stats of the current web page.
    • It gives quick access to Geo IP Location, backlinks, indexed pages, cached pages, socials, Whois, Alexa, etc.
    • It shows site security information rated by top security advisors such as McAfee SiteAdvisor.
    • The latest version, 9.6.0.0, was last updated on April 10, 2017.
    (Source: OpenSEO)

    6. SEER’s SEO Toolbox

    SEO Toolbox is a free and open source initiative by SEER Interactive, a company providing outstanding service and innovation across SEO, PPC, and Analytics. The company was established in 2002 by Wil Reynolds and focuses on helping people who turn to the web to solve their problems.

    Highlights:

    • Seer's SEO Toolbox claims to be a tool built for marketers, not programmers.
    • It is a spreadsheet-based tool in which mash-ups can be easily created.
    • Everyone working with the tool can collaborate as a team on the same doc.
    • It facilitates faster analysis for visits, bounce rates, and goal conversions.
    • It calculates metrics based on backlink analysis.
    (Source: Seer’s SEO Toolbox)

    We have included one more software solution for discussion under the SEO software category: SEMrush. SEMrush is a popular solution leading the market today, and a promising one for SEO companies and entrepreneurs in the digital marketing arena. Let us examine it in detail.

    SEMrush

    It is an all-in-one marketing toolkit offered to digital marketing professionals. It is developed for enterprises and e-commerce businesses that aim to boost the growth of their online stores. SEMrush is used by more than 4 million users worldwide as a ‘Software-as-a-Service’ digital marketing and analysis tool.

    (Source: SEMrush)

    Features:

    • SEMrush is a cloud-based, SaaS, and web-based software that easily operates on Windows, Mac, Android, and iPhone/iPad platforms.
    • It suits small, medium, and large businesses and offers monthly and annual subscriptions.
    • It provides the best multi-domain support across the board.
    • It generates customized reports in real time.
    • It conducts deep link and backlink analysis with phenomenal SERP ranking tracking.

    MOZ Pro, Ahrefs, SE Ranking, SpyFu, Serpstat, Mangools, Advanced Web Ranking, and MOZ Local are some other popular SEO software solutions on the market. If you are already using one of the solutions listed above, feel free to share your reviews here.

    Conclusion:

    Experts state, “SEO is as unpredictable as a hurricane.”

    So, when your strategy and planning seem to fall behind, SEO software takes charge.

    The future of SEO lies in rapid analysis and quick results for improvement on SERPs. 47% of visitors will not wait longer than 2 seconds for a page to load, and even then they stay only if exceptional, informative content retains their interest. The bounce rate rises as page load time increases.

    Natural Language Processing (NLP), Artificial Intelligence (AI), Machine Learning (ML) and User Experience (UX) are becoming the major transforming factors for the new era of Search Engine Optimization. Both visual and voice searches are expected to rise by 50% by the year 2020.

    So the bottom line is: using SEO software is a lucrative solution. It helps you quickly win over your audience through positive efforts and practices.

    Source

    Welcome to the Crafty ComputerChess program web page!


    Crafty Chess

    Crafty 25.2 is the current stable release.  This release includes a small bug fix that increases playing strength.

    Crafty
    Crafty is a free, open-source computer chess program developed by Dr. Robert M. (Bob) Hyatt.  Crafty is constantly being improved by a small team of contributors, including Dr. Hyatt. 

    Downloads
    Many versions of Crafty can be downloaded here.  The version numbers are of the form major.minor.bugfix, where there are very few “bugfix” versions.  The most recent is always the highest major version, and within that major version, the highest minor version.  Note that xx.2 is lower than xx.12.  These are source-only distributions.  They come with a Makefile that will generally work under Windows (Makefile.nt) and most Unix systems (Makefile), although either will likely need some editing to properly choose the correct hardware options.

    There are several directories here that contain useful information:

    You can find most old Crafty source versions here.  Note that there are NO executable files distributed on this web site, although many kind Crafty users make executables available for at least Windows, Linux and MacOS machines.

    You can find the source code for a version of Cray Blitz that dates back to approximately 1989 here.  Note that this is Fortran 77 source code although it can be compiled with the GNU fortran compiler easily.

    You can find documentation here.  While Crafty has a pretty complete “help” command, this provides ASCII, PostScript and troff/nroff versions of the documentation, along with a text file describing how to operate Crafty in a “manual tournament” such as the WCCC events.

    You can find the opening book files here.  “book.bin” is the main book file, which can be re-created by using the file “book.pgn” (after it is uncompressed) as input.  You can also create a bookc.bin (suggested opening lines when playing against a computer) and a books.bin (suggested opening lines when Crafty doesn’t know, or doesn’t think, it is playing against a computer).  They are not mandatory.

    Usage
    Crafty can be compiled and executed from a terminal window on a Macintosh, Windows or Linux computer.  Crafty can also be run with a GUI interface such as Winboard (Windows) or Xboard (Linux).

    GUI
    Valters Baumanis has written his own Crafty Chess interface that can be downloaded here.

    Configuration
    Crafty can be configured for a stronger game by setting some optional parameters (examples below).  These parameters can be put into the crafty.rc file, or used as start-up parameters from the terminal.  A more thorough explanation of Crafty’s parameters can be found here.

    ponder on (Allows Crafty to think on your time)
    hash=256m (Increases Crafty’s position hash to 256MB)
    hashp=64m (Increases Crafty’s pawn hash to 64MB)
    egtb (Tells Crafty to use syzygy Endgame Tables)
    cache=32m (Increases Crafty’s Endgame Table Cache to 32MB)
    swindle on (Allows Crafty to try to win games that are drawn according to Endgame Tables)
    mt=4 (Increases Crafty’s MaxThreads to 4 for a quad core computer)
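
    Taken together, the settings above could be collected in a crafty.rc file so they are applied automatically at start-up. The following is only a sketch using the example values from the list; tune the hash sizes and the thread count to your own hardware:

    ```
    ponder on
    hash=256m
    hashp=64m
    egtb
    cache=32m
    swindle on
    mt=4
    ```

    The same lines can instead be passed as start-up parameters on the terminal command line, as noted above.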

    This is a personal page managed by Mr. Tracy Riegle, Annville Pennsylvania, USA.

    I can be contacted via e-mail at CraftyChessPage@gmail.com.  The subject line MUST contain the word ‘Crafty’ or my spam filter will trash it.  Please allow a day or three for a response.

    This page was last updated on October 29th 2016.


    Source

    Open Information Security Foundation | Community Driven, Open Source


    Team

    The Open Information Security Foundation (OISF) is dedicated to preserving the integrity of open source security technologies and the communities that keep them thriving. Our team and our community include world-class security and non-profit experts, programmers, and industry leaders dedicated to open source security technologies.
    MEET THE TEAM

    Suricata

    Suricata is a free and open source, mature, fast, and robust network threat detection engine capable of real-time intrusion detection (IDS), inline intrusion prevention (IPS), network security monitoring (NSM) and offline packet capture (pcap) processing. Suricata’s fast-paced, community-driven development focuses on security, usability, and efficiency.
    LEARN MORE

    Contact

    OISF maintains an unwavering commitment to open source communities and security technologies. As demand for security technologies grows, OISF welcomes teams or projects that are looking for a home. OISF provides the business infrastructure necessary for open source projects to thrive.
    What’s Next?

    Source

    OSINT Open Source Intelligence tools resources methods techniques

    edited 8dec17 AR

    Directories are not really search engines, but mainly systematically arranged listings of links selected by humans instead of a computer program. Most are by now outdated and no longer maintained. They do serve a function in OSINT though.

    edited 4aug18 AR

    Introduction: A few directories more or less suited for general purpose.
    o – Aviva Directory: Categorised, with search engine. Launched 2005. Selective, human-edited. Organised by topic, then by country.
    o – Best of the Web
    o – Galaxy.einet: One of the oldest directories on the Net, EiNET lists 2.000.000 links in about 700.000 categories. Links are annotated.
    o – The WWW Virtual Library: Geneva, Switzerland. Very small, but could be useful.
    Introduction: The following are Dutch language only.
    o – Start Nederland: Dutch links in a very compact layout.
    o – Startpagina: Dutch-language systematic arrangement of all kinds of subjects and ‘daughter pages’. Popular.
    o – DMOZ Archive: Archive of the DMOZ Open Directory Project. Closed since 2016.

    [searchers] [premium] [reference] [subject]

    edited 1may09 ar

    For prospective searching: to find information that cites the information in the search argument, or to find information that is newer than what we have. Also consider the vertical citation indexes like CLAIMS and the Derwent Patents Citation Index (both Dialog) for patent citations.
    Also try the link: operator available in Exalead. It used to be available in Google and Yahoo but has been turned off for some time now.
    o – Academic Search: Microsoft.
    o – CiteSeerX beta: Successor of CiteSeer. “Citations made by indexed documents”. Searches about 767.000 docs. Emphasis on mathematics and computer science.
    o – Google Scholar: Limited, simple, but usable.
    o – Arts & Humanities Citation Index: ISI Institute of Scientific Information. Dialog file 439.
    o – ISI Web of Knowledge
    o – Science Citation Index: ISI Institute of Scientific Information. Dialog files 34, 434.
    o – Scopus: Elsevier B.V. Much larger than the ISI products.
    o – Social Science Citation Index: ISI Institute of Scientific Information. Dialog file 7.
    o – Web of Science: Thomson.

    [searchers] [premium] [reference] [subject]

    edited 9may13 ar

    Also called the hidden web, dark web, or invisible web, though none of these labels is really accurate. Many of the sites below promise things they don’t do, or simply don’t function at all. There is a special subcategory for the Onion network searchers; see ‘Tor searchers’.
    o – Complete Planet: Bright Planet Corp. Listing of about 70.000 searchable databases and specialty search engines not found by ‘normal’ search engines.
    o – Invisible Web: (being refurbished).
    Introduction: See category ‘tor search’ for onion directories and search engines
    o – Tor Browser Bundle: The Onion Router. Software package including a browser (Firefox only) to surf the web almost anonymously, as well as to dive into the .onion pseudo-domain network. Obligatory for searching .onion websites.
    o – Goshme: To find search engines with exclusive content and features. Needs (free) registration, which can be slow.
    o – Turbo 10: “Search the deep net”.

    [searchers] [premium] [reference] [subject]

    edited 12dec138 ar

    Also see Video cameras
    o – Shodan computer search engine: “…search the Internet for computers. Find devices based on city, country, latitude/longitude, hostname, operating system and IP.”

    [searchers] [premium] [reference] [subject]

    edited 10apr08 AR

    The Net only covers a fraction of what’s out there. Conventional libraries are still important for finding high-quality information, although actual acquisition can be time consuming. Here are the world’s major libraries.
    o – Bibliotheque Nationale de France (FR)
    o – Bibliotheque Royale de Belgique (BE)
    o – British Library (UK)
    o – Deutsche Nationalbibliothek (DE)
    o – Koninklijke Bibliotheek (NL): Royal Library, the national library of The Netherlands. The catalogues have an excellent search language. SL
    o – Library and Archives Canada (CA): Union of the former National Library and former National Archives of Canada. Extensive holdings, incl. film, music scores, maps, art, video. Simple search engine with limited functionality.
    o – Library of Congress (US)
    o – Russia St. Petersburg (RU)
    o – Catalogus pagina: Dutch-language site listing hundreds of online catalogues worldwide.
    o – LibWeb: “Library servers via WWW”, updated daily, lists over 7500 pages from libraries in over 135 countries.

    [searchers] [premium] [reference] [subject]

    o – Bibliografia Nazionale Italiana (IT)
    o – Bibliographie Nationale de France (FR): Online, French only. RSS service. Systematic by month, no search option.
    o – British National Bibliography (UK): Uses Dewey Decimal (DDC) and LC headings. Also forthcoming publications. The weekly list is at “The Week’s new BNB records”, in downloadable PDFs.
    o – Deutsche Nationalbibliographie (DE)
    o – National bibliography (BE)
    o – National bibliography online (NL): Online database, good search functionality, English and Dutch. Last year only. Includes forthcoming publications.
    o – Out now: Netherlands bibliography weeklist. Postponed from April 2015 onwards. Weekly list of the Dutch national bibliography, published weekly in English and Dutch. A-list: new books, first issues and special issues of journals and annuals, as well as electronic publications; B-list: publications issued by governmental authorities (national and local), scientific institutes and similar organizations, privately published dissertations and other university papers; C-list: other new publications.

    [searchers] [premium] [reference] [subject]

    Introduction: Holdings of some of the most important libraries in the world. Use the interlibrary loan systems to access them.
    o – BN Opale-Plus (FR)
    o – Biblioteca Apostolica Vaticana
    o – Bibliotheque Royale Belgique (BE)
    o – British Library Main Catalogue: Holds 57 million items. Excludes the BNB.
    o – CCFR (FR)
    o – Koninklijke Bibliotheek (NL)
    o – Library and Archives Canada (CA)
    o – Library of Congress (US)
    o – Russia St. Petersburg (RU)
    o – OAIster: “OAIster is a union catalog of millions of records representing open access resources that was built by harvesting from open access collections worldwide using (OAI-PMH). OAIster includes more than 25 million records representing digital resources from more than 1,100 contributors. OAIster records are fully accessible through WorldCat.org, and will be included in WorldCat.org search results along with records from thousands of libraries worldwide.”
    o – WorldCat: Connects collections of more than 10.000 libraries worldwide with access to over 130 million items. Free account to make lists. Good search functionality. Also internet resources, DVDs, etc. Successor to RedLightGreen.
    o – National Library Catalogues Worldwide
    o – The European Library

    [searchers] [premium] [reference] [subject]

    edited 24dec10 AR

    Listing links to bookshops and sources for full text books.
    o – Amazon
    o – Amerigo
    o – Barnes & Noble
    o – Bibliofind
    o – Noord Nederlandsche Boekhandel
    o – A9: Amazon’s book search engine.
    o – Google Books: Mainly reviews, but also lots of full-text books.
    o – Google Ebookstore: About 3m titles. Intended for mobile reading. Titles in users’ personal libraries are stored in the cloud. Partly fee-based, account required.
    o – Million Book Collection: The Universal Digital Library.
    o – Open Content Alliance: “…is a collaborative effort of a group of cultural, technology, nonprofit, and governmental organizations from around the world that helps build a permanent archive of multilingual digitized text and multimedia material. An archive of contributed material is available on the Internet Archive website and through Yahoo! and other search engines and sites.”
    o – Open Library: About 22m books, of which about 1m with full text. With clustering and frequency-table tools. Beta.
    o – Project Gutenberg: Access to 28000 free full-text books online. About another 100.000 are available at the project’s partners and affiliates.

    [searchers] [premium] [reference] [subject]

    edited 30jun16 ar

    Introduction: Traditionally, there are three major general purpose vendors:
    o – Factiva: U.K. Mainly aimed at business news. Many news sources for business analyses, companies, markets, etc. Great at creating custom-made end products for the end user; good search language and coverage, although the automated process of assigning keywords often fails. Use the Factiva database catalogue to get an impression of the coverage.
    o – Lexis-Nexis: USA. Largest aggregator worldwide. Billions of documents online. Fair search language and fair support. Tends to be a bit fluffy. Good coverage of Dutch newspapers. Comes in many flavours: LN Academic, LN Diligence, etc. Acquired Moreover in 2014.
    The database catalogue may be used to look up which sources are available. Here are the search language help files.

    o – Proquest Dialog: US. Aimed at scientific and technical information, also business and some news. Excellent selection of databases, very good search language and metadata design, very good support and documentation.
    Direct links to the database documentation: Proquest Dialog Customer Resources, with links to the search guides, quick reference cards and the Proquest Dialog Database catalogue; ProQuest Dialog ProSheets, detailed descriptions of each database.

    Introduction: Below is a selection of the large – more or less – general purpose commercial information providers.
    o – Bloomberg: Business and markets. Financial information services, news and desktop integration. Expensive, but the most authoritative.
    o – Open Source Center: Top-class information source for international relations and security. Formerly the FBIS, now OSC. Needs (free) registration.
    Introduction: Also see the category Government
    o – OpMaat: Sdu. Dutch-language legal database offering databases, journals, and jurisprudence. Dutch only.
    o – PiCarta: By OCLC. Access to the Dutch central union catalogue and Online Contents. Fee-based.
    o – IHS Jane’s
    o – USNI Periscope

    [searchers] [premium] [reference] [subject]

    edited 27mar18 ar

    Also thesauri, taxonomies, ontologies
    o – Dewey Decimal Classification (DDC) summaries: Brief history of the DDC system and a listing of the classes up to the third summary. Maintained by OCLC.
    o – Universal Decimal Classification (UDC): UDC Consortium. International. Multilingual. The world’s most widely used system to organize documents systematically by subject. Fee-based. A free summary of the main classes is available.
    o – Schema voor de Indeling van de Systematische catalogus in Openbare bibliotheken (SISO): Dutch only. Maintained by NBD Biblion.

    [searchers] [premium] [reference] [subject]

    o – Impala: Belgium.
    o – Picarta: The Netherlands.
    o – United Kingdom

    [searchers] [premium] [reference] [subject]

    edited 9dec09 ar

    Moved to ‘Terminology‘.

    [searchers] [premium] [reference] [subject]

    edited 27jul10 ar

    Only the general encyclopedias are listed here. There are, however, thousands more, typically specialised in some subject. Please note that many general encyclopedias hardly have an online presence but are very important anyway, like the Encyclopedia Americana, Le Grand Larousse, and the Deutsche Enzyklopädie.
    o – Encyclopedia Britannica: Probably the world’s most famous scientific encyclopedia.
    o – Wikipedia: The most popular one, but its reliability is doubtful, although the enormous peer group suggests the contrary.
    o – Encyclopedia.com: Searches the Columbia Encyclopedia and some others, as well as dictionaries.

    [searchers] [premium] [reference] [subject]

    o – Bartleby Reference
    o – Information Please
    o – Internet Public Library Ready Reference
    o – Research-It!
    o – Yahoo reference

    [searchers] [premium] [reference] [subject]

    edited 6aug2018 AR

    A well-prepared free-text search strategy starts with finding the terminology: which terms best describe the concept at hand? This category lists some tools that may come in handy.
    Introduction: Dictionaries can be very helpful in preparing a search. They should be used in accordance with the Semantic Table to create proper keywords, paying attention to such things as synonyms, homonyms, spelling variations, translations, history, etc.
    o – IATE: “Interactive Terminology for Europe”. Successor to Eurodicautom. European terminology database. Very useful for bureaucratic terminology in all languages of the EU.
    o – Merriam-Webster: American dictionary, thesaurus, medical. Returns definitions, examples of use, some relational terms.
    o – MetaGlossary: In beta. For looking up definitions of terms. Returns related terms and definitions with sources. Also handy for finding synonyms or other related terms.
    o – Oxford English Dictionary: Oxford University Press. English-to-English dictionary.
    o – The Free Dictionary: “…English, Medical, Legal, Financial, and Computer Dictionaries, Thesaurus, Acronyms, Idioms, Encyclopedia, a Literature Reference Library.”
    o – Dictionnaire de l’Académie française: 9th edition, in French.
    o – Le Petit Larousse: French dictionaries and encyclopedia.
    o – Duden: German dictionary with synonyms.
    o – Interglot translation dictionary: In English, based in The Netherlands. Independent online translation dictionary. Autocomplete and spellcheck. English, Spanish, Dutch, German, French, Swedish.
    o – WordReference.com: Translations, descriptions and synonyms for French, German, Italian, Spanish, Dutch, Swedish, Catalan, Russian, Portuguese, English, and more
    o – Algemeen Nederlands Woordenboek: In Dutch; online dictionary of contemporary Dutch of The Netherlands and Flanders, with example sentences, general grammatical properties, spelling and pronunciation.
    o – MWB: Mijnwoordenboek: In Dutch; translates to and from English, Dutch, French, German and Spanish. Annotated, multidisciplinary (also legal terms with explanations, technical terms, etc.). Gives excellent explanations of terminology depending on discipline or science.
    o – Straattaal betekenis weten?: List of street terminology spoken in the larger cities of The Netherlands. In Dutch.
    o – Straatwoordenboek.nl: Dutch street slang. Holds about 10.000 terms and more than 3100 definitions.
    o – Thesaurus Politiekunde: Search tool for making knowledge and publications retrievable, maintained by the Knowledge and Information Hub of the Police Academy of the Netherlands. Particularly handy for finding alternative search terms. Shows, per term, its relations to other terms. With direct links to the catalogues of the media centre.
    o – Woordenboek der Nederlandsche Taal: Instituut voor de Nederlandse Taal. Covers 1500 to 1976.
    o – Woordenlijst: In Dutch. Also known as “Het Groene Boekje”, the official spelling list of Dutch words.
    o – Acronym Finder: The world’s largest and most comprehensive dictionary of acronyms, abbreviations, and initialisms. More than 5 million acronyms and abbreviations. Categorized.
    o – Slangit: the slang dictionary: The source for finding commonly used acronyms, emoticons, etc. Very useful if you want to search through social media.
    o – Synoniemen.net: Definitions of terms and alternatives. Links to other synonym sites. In Dutch.
    o – The Free Thesaurus: Synonym finder. Extensive, though not always relevant. The popup screens with word explanations are handy.
    o – Thesaurus.com: Good for synonyms, broader and narrower terms. With example sentences. Also includes related terms for the related terms.
    o – Wikibrains: Online brainstorming tool. Enter a topic and see associated terms. Click on one of the bubbles to get more information on it in the sidebar.
    Introduction: Some general purpose online translate tools.
    o – Bab.la: “Dictionary Vocabulary Translation”. Translation to/from English for mostly European languages, but also Hindi, Japanese, Korean and Arabic. With usage examples and pronunciation. Includes synonyms.
    o – Bing Translator: Similar to Google Translate. You can choose between different views of the original text and the translated one. The “side-by-side” view of translations is very useful.
    o – Google Translate: Many languages. Not just translating text or words, but also translated search, i.e., searching Arabic-language websites using English-language terms. You can also translate complete webpages.
    o – Links online dictionaries: Extensive list of links to dictionaries of about 150 languages.

    [searchers] [premium] [reference] [subject]

    This category holds links to sources that are worth further investigation, or recently (re)discovered sources. These should be analysed, annotated and arranged somewhere else.
    o – Howards Home: Dutch, and in Dutch.
    o – Kroll Associates: Risk consulting company.
    o – Stratfor
    o – iJET Intelligent Risk Solutions
    o – BurellesLuce
    o – Cision (MediaMap)
    o – Business Monitor International (BMI)
    o – De Krantenbank: Dutch language. Full-text access to the content of 6 major Dutch-language newspapers, back to approx. 1991. Restricted to use through Dutch public libraries.
    o – Keesings Historisch Archief: Dutch-language historical site with journal articles on current affairs dating back about 60 years. About 110.000 articles, updated monthly with about 250 new articles, describing the history of NL and BE.

    [searchers] [premium] [reference] [subject]

    edited 5apr13 AR

    o – International Air Transport Association (IATA): International trade association representing 290 airlines. Holds data on airfields, standards, codes, manuals, etc.
    o – Aircraft Charter World: Commercial site containing information about over 13.000 airports worldwide: their location, basic information and capacity.
    o – PilotFriend: Collection of information for pilots: aviation weather, a simple airport database, and general information on several aspects of flying.
    o – World Aeronautical Database: Database containing information on nearly 10,000 airports and over 11,000 navaids worldwide.
    o – FlightRadar24: Live air traffic on a map or satellite view with moving icons. International. Gives information on movement, flight information, position, altitude, route, and more. Also cockpit view.
    o – The Airport Guide: The-Airport-Guide.com is an online tool helping operators of all types locate airport information, including handlers, maintenance or catering. Outdated; not updated since November 2010.

    [searchers] [premium] [reference] [subject]

    edited 15dec13 AR

    Selection of search tools to find information contained in weblogs.
    o – BlogSearchEngine
    o – Meltwater IceRocket: Real-time search engine for blogs.
    o – BlogDigger: Note the ‘groups’ option.
    o – GetBlogs: Small, categorized directory of blogs.
    o – Globe of Blogs: Directory of blogs, by author, topic, title, location, birthday.
    o – Blogsearch Google
    o – Faganfinder
    o – BlogPulse: For searching the blogosphere.
    o – Blogline
    o – Technorati: The most popular search engine for blogs.

    [searchers] [premium] [reference] [subject]

    edited 22jul2020 AR

    o – DART-Europe E-theses portal: DART-Europe is a partnership of research libraries and library consortia who are working together to improve global access to European research theses. Holds 830.000 open-access titles from 619 universities in 28 European countries. It is the European working group of the Networked Digital Library of Theses and Dissertations (NDLTD).
    o – Networked Digital Library of Theses and Dissertations: The NDLTD is an international organization dedicated to promoting the adoption, creation, use, dissemination, and preservation of electronic theses and dissertations (ETDs). Has a database listing about 70 countries, with the national repository for each country. The Global ETD Search page allows international search in about 6 million titles.
    o – OAister: Open access initiative giving access to millions of open-access resources. 50 million records. Uses WorldCat to search the OAister database.
    o – Open Access Theses and Dissertations (OATD): Holds about 5.2 million titles from 1100 universities, colleges and research institutions.
    o – OpenDOAR: Global Directory of Open Access Repositories. Listing, per country, of academic institutions. About 70 for the NL. Extensive search options.
    o – PQDT Open: ProQuest Dissertations & Theses. Open-access docs only. Full text. Free.
    o – Registry of Open Access Repositories (ROAR): Bibliography of catalogues. Choose any registry, repository type ‘e-Theses’, and then a country name.
    o – WorldCat: International. Union catalogue of more than 10.000 libraries. The world’s largest library. In advanced search, choose ‘content’ then ‘thesis/dissertation’, or, in ‘format’, choose ‘thesis/dissertation’.
    o – ProQuest Dissertations & Theses Global: Graduate dissertations and theses. About 5 million titles, growing by 200.000 per annum. From 100 countries. Fee-based.
    o – Catalogus van academische geschriften in Nederland verschenen: Printed bibliography of dissertations. Covers 1952-1979.
    o – Bibliografie van Nederlandse proefschriften = Dutch theses: Bibliography of Dutch dissertations. Covers 1980-1990.
    o – Catalogus van academische geschriften in Nederland en Nederlandsch Indië verschenen: Printed bibliography (1924-1940) of Dutch dissertations.
    o – CatalogusPlus: University of Amsterdam. In ‘material type’ choose ‘PhD thesis’. Supports basic Boolean.
    o – NARCIS: National Academic Research and Collaborations Information System. NARCIS is the main national portal for information about researchers and their work. Provides access to scientific information, including (open access) publications from the repositories of all the Dutch universities, KNAW, NWO and a number of research institutes, datasets from some data archives, as well as descriptions of research projects, researchers and research institutes. About 2.2 million publications.
    o – UvA-DARE: University of Amsterdam Digital Academic REpository. Contains articles, books, chapters, PhD dissertations, reports and inaugural lectures of UvA staff. About 175.000 records, many full-text.

    [searchers] [premium] [reference] [subject]

    edited 22sep17 AR

    Mainly government information on the Internet, including world leaders. Also consider country information.
    Introduction: All of the following are mostly in Dutch only.
    o – Officiele bekendmakingen: All official parliamentary documentation from 1995 to now: Staatsblad, Staatscourant, Handelingen, Tractatenblad, etc. Searchable full text, by title, document numbers, period, format and more. In Dutch. See Staten-Generaal Digitaal for the period 1814-1995.
    o – Overheid.nl: General portal giving access to all information from the Dutch government and official Dutch government institutions. In Dutch.
    o – Overheid.nl: de wegwijzer naar informatie en diensten van alle overheden: “Government organisations”. Dutch-language address book listing government bodies and all government organisations, as well as counties, municipalities and semi-government bodies. Very complete and up to date. Part of ‘Overheid.nl’.
    o – Parlement & Politiek: University of Leiden, Parliamentary Documentation Center. Information on Dutch politics and its players. Mainly devoted to the backgrounds of NL politics.
    o – Staten-Generaal Digitaal: “Parlementaire documenten uit de periode 1814-1995”. Official Dutch parliamentary documentation from 1814-1995. Full text; searchable by title words, full text, document numbers and dates. See Officiele bekendmakingen for the period 1995 to now. In Dutch.
    o – Wet- en regelgeving: In Dutch. Access to Dutch laws, treaties, policy papers, decisions, regulations, etc. Full text. Including counties and city councils.

    edited 22jun12 AR

    Introduction: Access to common laws and regulations regarding the European Union.
    o – EUR-Lex: Access to European Union law, covering agreements, directives, regulations and decisions. With links to the EU budget, parliamentary questions, etc.
    o – European Commission Library and E-resources: Online library catalogue, full access. Also to full-text documents available online.
    o – European Council – Council of the European Union
    o – European Union: Official website of the European Union.
    o – Official documents: Links to the Official Journal, European Parliament, European Council, Council of the European Union, European Commission, European External Action Service, and all other main EU institutions.
    o – GAO: Government Accountability Office.
    o – USA.gov: Successor to FedWorld.gov. Very large single-entry site for the USA government.

    edited 23jan2014 gn

    o – Heads of state, heads of government, and ministers for foreign affairs. UN Protocol and Liaison Service. Ordered by country; lists the head of state, head of government and minister of foreign affairs. Regularly updated.
    o – Rulers“…contains lists of heads of state and heads of government (and, in certain cases, de facto leaders not occupying either of those formal positions) of all countries and territories, going back to about 1700 in most cases. Also included are the subdivisions of various countries […], as well as a selection of international organizations. Recent foreign ministers of all countries are listed separately.”. Also religious leaders, international organizations, and biographies with pictures.
    o – World Leaders. Successor to Chiefs of State and Cabinet Members of Foreign Governments. CIA, U.S. Lists the names and functions of cabinet members and other important government members of all countries in the world, including representatives to the UN, ambassadors to the USA and the heads of central banks.
    o – World political leaders. Roberto Ortiz de Zarate. International listings, for all countries, of leaders from 1945 onwards. Lists former political parties, current political parties, royalty, prime ministers with dates and political party, some ministers and chairmen of chambers, and chairmen of parties.
    o – WorldLeaderTwitterDirectory. Detailed directory of world leaders from (inter)national governments and international organisations on Twitter, listing Twitter accounts, names, organisations, websites and more. Excellent work.

    edited 22oct17 AR

    o – Embassy world. Lists the countries of the world and, per country, its embassies worldwide.
    o – Governments on the WWW. Last change 2002. Comprehensive database of governmental institutions on the World Wide Web: parliaments, ministries, offices, law courts, embassies, city councils, public broadcasting corporations, central banks, multi-governmental institutions, etc. Also includes political parties. Online since June 1995; contained more than 17,000 entries from more than 220 countries and territories as of June 2002.
    o – Political resources on the net. Political resources per country: parties, governments, elections, history, and much more. With references to many (inter)national political websites.
    o – Worldwide Embassy Database
    o – Foreign governments on the Web. Not there anymore.
    o – International documents

    edited 22mar10 AR

    See: Shipping.

    edited 03jan19 ar


    o – The Internet Archive. The Internet Archive is a 501(c)(3) non-profit founded to build an Internet library. It contains texts, audio, moving images and software, as well as archived web pages, and provides specialised services for adaptive reading and information access for the blind and other persons with disabilities.
    Introduction: To find out who owns a particular domain name or website. Often most convenient to first go to IANA, then any of the regional internet registries. Please note: ‘whois’ is used to find the data for a DNS name; ‘lookup’ is used to find the data for an IP address.
    o – DomainCrawler. Swedish. Started in 2006. Offers domain information including ccTLDs, focusing on SEO. Version 2.0 offers new functionality, such as thumbnails, backlink checks, pagerank history and the ability to monitor a domain for changes.
    o – DomainSearch.com. Searches generic top-level domains as well as some ccTLDs.
    o – DomainTools. Favourite. The whois service is free; the remainder is fee-based.
    o – ICANN Whois. WHOIS database and lookup of ICANN. gTLDs only. Also explanations of the history and technical side of Whois. Glossary.
    o – InterNIC. Whois service. Domain, registrar and nameserver.
    o – TransIP WHOIS lookup. Leiden, The Netherlands.
    o – Who.IS. WHOIS search, domain name, website, and IP tools.
    o – Whois.net. Limited search for a selection of ccTLDs and a few gTLDs.
    o – WhoisRequest. Whois lookup, reverse IP, reverse NS and domain history.
    o – Visual traceroute. Part of DNS tools. Plots the route and final destination on a map.
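    The ‘whois’ versus ‘lookup’ note above hides a very simple protocol: WHOIS (RFC 3912) is just the query sent as one plain-text line over TCP port 43, with the reply streamed back until the server closes the connection. A minimal Python sketch, not any particular site’s implementation; `whois_query` and `parse_whois` are illustrative names, and the IANA server is the starting point the introduction recommends:

```python
import socket

def whois_query(name, server="whois.iana.org"):
    """WHOIS per RFC 3912: send the query plus CRLF to TCP port 43,
    then read the plain-text reply until the server closes the socket."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(name.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def parse_whois(reply):
    """Collect the first value seen for each 'key: value' line,
    skipping the '%'/'#' comment lines most WHOIS servers emit."""
    fields = {}
    for line in reply.splitlines():
        if line.startswith(("%", "#")) or ":" not in line:
            continue
        key, _, value = line.partition(":")
        if value.strip():
            fields.setdefault(key.strip().lower(), value.strip())
    return fields
```

    For a domain, IANA’s reply normally carries a ‘refer:’ field naming the registry that holds the full record; query that server next. For an IP address, the same two calls work against the regional registries listed further down.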

    edited 10nov13 ar

    Introduction: Tools to assist in network analysis; suites that offer multiple tools for whois, lookup, hop, reverse IP, IP, dig, ping, reverse DNS, etc.
    o – CentralOps.net. “Advanced online Internet utilities”. Offers domain dossiers and checks, email dossier, ping, traceroute, nslookup, autowhois, TCP queries and more.
    o – DNS Stuff. Four areas: domain tools, network tools, email tools and IP tools.
    o – Da Whois. Site info, whois, traceroute, RBL check, What’s my IP.
    o – IP-Adress. Does IP tracing, email tracing (by analysing the email headers), IP whois, reverse IP, speed test, external IP and system.
    o – IPinfo Security Portal. IP address, IP location and many more tools.
    o – ISC Internet Domain Survey. “The Domain Survey attempts to discover every host on the Internet by doing a complete search of the Domain Name System.”
    o – Network tools. Overview of online web-based network utilities.
    o – Network tools (Pinelands). Very good. Offers ping, traceroute, whois, lookup and many more.
    o – RobTex Swiss Army Knife Internet Tool. DNS checks, WHOIS, and network analysis. Also graphical display of networks and relations.
    o – TCPUTILS. “The ultimate online investigation tool”. Traceroute, whois, reverse IP, geo lookup, ping, MAC lookup, DNS lookup, and more.
    o – ViewDNS. Collection of tools, ranging from reverse IP over traceroute to MAC address lookup.
    o – WebTic DNS scan. Performs a reverse DNS scan on a given domain name or IP address, returning a list of IP addresses and domain names of the entire C-class. In Dutch.
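    The reverse DNS scan of an entire ‘C-class’ described in the last entry is easy to sketch with the Python standard library alone: enumerate the /24 around the target address, then ask the resolver for a PTR record for each neighbour. A hedged sketch with hypothetical helper names; the PTR lookups require a working resolver and can be slow:

```python
import ipaddress
import socket

def c_class_hosts(ip):
    """All 254 host addresses in the same /24 ('C-class') as the given IP."""
    network = ipaddress.ip_network(ip + "/24", strict=False)
    return [str(host) for host in network.hosts()]

def reverse_dns(ip):
    """Return the PTR hostname registered for an IP, or None if there is none."""
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip)
        return hostname
    except (socket.herror, socket.gaierror):
        return None

def scan_c_class(ip):
    """Map every neighbour of `ip` that has a PTR record to its hostname."""
    results = {}
    for neighbour in c_class_hosts(ip):
        hostname = reverse_dns(neighbour)
        if hostname:
            results[neighbour] = hostname
    return results
```

    The online suites above do the same work server-side, which also sidesteps local resolver limits.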
    Introduction: Tools that return a map with the geographic location of an IP address.
    o – Free IP Address & Geolocation Lookup Tool
    o – Geo IP Tool. Returns physical address information and a geographical map of the location of a domain name or IP address.
    o – Infosniper. Returns the geo location of an IP address on a map. Also provider, hostname, timezone, state, latitude and longitude.
    o – RIPE Geoloc widget. Returns a map with an indication of the geolocation of the IP address.
    Introduction: Which websites are hosted on the same IP address? Reverse IP domain checking tools.
    o – Reverse IP domain check. Get a list of websites hosted on the same web server; port checker; find your external IP address; visual traceroute; whois. Mostly refers to other, irrelevant (US-only) commercial sites such as Spokeo.
    o – Internet Traffic Report. “…The Internet Traffic Report monitors the flow of data around the world. It then displays a value between zero and 100. Higher values indicate faster and more reliable connections.”
    o – Netblocks : mapping Internet freedom in real time. Publishes reports on partial or full Internet shutdowns worldwide.

    o – Internet Society (ISOC). Home of the IETF and others. ISOC is the organisation that controls it all.
    o – IANA. “The Internet Assigned Numbers Authority (IANA) has authority over all number spaces used in the Internet, including IP address space. IANA allocates public Internet address space to Regional Internet Registries (RIRs) according to their established needs.” See also the IANA IPv4 Address Space Registry.
    o – ICANN. Internet Corporation for Assigned Names and Numbers. “ICANN is responsible for the global coordination of the Internet’s system of unique identifiers. These include domain names (like .org, .museum and country codes like .UK), as well as the addresses used in a variety of Internet protocols.” ICANN was supposed to replace IANA.
    Introduction: There are five RIRs worldwide, all serving the same purpose: “…provides Internet resource allocations, registration services and co-ordination activities that support the operation of the Internet globally.” In general, look for a ‘Database’ link for the WHOIS services; search for Reports or Statistics to get the allocation tables.
    o – APNIC. Asia Pacific Network Information Centre. See the allocation listing.
    o – ARIN. American Registry for Internet Numbers. Covers Canada, Caribbean and North Atlantic islands, and the United States.
    o – AfriNIC. Africa Network Information Centre. See the resources list for distribution.
    o – LACNIC
    o – RIPE NCC. Reseaux IP Europeens Network Coordination Centre. Serves Europe, the Middle East and parts of Asia; see the Service Region. See the allocation table for allocations of addresses to countries. Operates k.root-servers.net.
    Introduction: The explanations by Karrenberg are excellent, aimed at the beginner, non-technician.
    o – DNS Root Name Servers Explained For Non-Experts. By Daniel Karrenberg, ISOC, 2007.
    o – DNS Root Name Servers Frequently Asked Questions. By Daniel Karrenberg, ISOC, 2008.
    o – Full list of ccTLD and gTLD
    o – Overview generic top level domainsICANN
    o – The Internet Domain Name System Explained for Non-Experts. By Daniel Karrenberg, ISOC, 2004.

    edited 20jun11 AR

    Some useful image search sites, as well as two reverse image search engines.
    o – Google images. Most useful source.
    Introduction: The (currently) main single search engines all have image search, with almost spectacular differences in search results.
    o – Bing images
    o – Flickr. Yahoo. Photographs only. Geotagging, blogging, personal pages. Very large.
    o – Instagram. Owned by Facebook. Mobile version only.
    o – Picasa Web Album. Google. Photo sharing, making albums. Also the free client Picasa 3 for downloading and organizing your own photos.
    o – Picsearch. Three billion pictures.
    o – Tiltomo. “…uses proprietary algorithms (mathematical calculations) to analyze the similarity and relationship between images.” Experimental. Based on Flickr. Also see TinEye.
    o – Yahoo images
    o – Google Images. Reverse image search. Drag a picture into the search bar to start a search. Does not lead to any useful results though. Yet.
    o – TinEye reverse image search. “TinEye is a reverse image search engine. You can submit an image to TinEye to find out where it came from, how it is being used, if modified versions of the image exist, or to find higher resolution versions. TinEye [uses] image identification technology rather than keywords, metadata or watermarks.” Also see Tiltomo.
    o – BestPicturesOf. Produced by Webcellence (Jerusalem, Israel). Uses Bing, Google and Flickr to search for images, photos, illustrations and caricatures. This site is equivalent to Zuula picture search.
    o – How to find images on the Internet. Directory of links to image search sites, also maps. Covering general-purpose sites, press, museums, commercial sources, art, science, medical images, etc.
    o – Ditto


    edited 2apr09 ar

    Free and fee-based journals.
    o – African Journals Online (AJOL). “…over 340 peer-reviewed journals from 25 African countries. These journals cover the full range of academic disciplines with strong sections on health, education, agriculture, science and technology, the environment, and arts and culture.”
    o – Directory of Open Access Journals. “…covers free, full text, quality controlled scientific and scholarly journals”. About 4,100 journals and 270,000 articles. Good subject tree. Search articles or journals, by subject, title or author.
    o – Dutch list
    o – Electronic Journal Title Index
    o – Elektronische Zeitschriftenbibliothek. “…is a cooperative service of 503 libraries with the aim of offering their users simple and convenient access to scholarly journals published electronically. All journals that offer full-text articles are included.”
    o – HighWire Press. Stanford University. 71 of the 200 most-frequently-cited journals publishing in science. […] full-text life science articles in the world, with 1,892,462 articles available without subscription. Also fee-based articles. Good search possibilities, alerts. Good for international relations.
    o – John Labovitz list
    o – LivRe!. Brazil. Portal to free-access journals on the Internet, about 3,600 journals. Mainly the natural sciences.
    o – Newslink
    o – Open J-Gate. “…indexes articles from 4547 academic, research and industry journals. […] 2580 of them are peer-reviewed scholarly journals.” Links to one million articles, growing by about 300,000 per year.
    o – Magazines for Libraries. LaGuardia, Cheryl, ed., with Bill and Linda Sternberg Katz. 17th ed. New York: Bowker, 2003.
    “…An annotated listing by subject of over 6,000 periodicals. Each entry gives name of periodical, beginning publication date, publisher, editor, address, price and such information as indexing, size, and level of audience. Short abstracts describe the scope, political slant, and other aspects of the publication. Arrangement is topical, bringing magazines and journals on like subjects together. To find an individual title, use the title index at the end of the volume.”

    o – Ethnologue. International. Describes, for each country in the world, the languages spoken.

    edited 11oct2012 AR

    o – Eulex
    o – Europol
    o – Interpol
    o – Drug Enforcement Administration DEA
    o – Federal Bureau of Investigation FBI
    o – Eulex
    o – Law Library of Congress Serbia
    o – OSCE Mission to Serbia : law enforcement
    o – World Legal Information Institute. Catalog and search facilities for over 500 databases from 55 countries, including case law, legislation, treaties, law reform reports, law journals, and other materials.
    o – Police magazine
    o – Alarmeringen P2000. Real-time notices for police, ambulance and fire brigade. Per region and city.
    o – P2000 meldingen. Real-time notices plus date and time, service and descriptions.
    o – Politiebronnen. Investigating open sources. In Dutch.
    o – Politiescanner.net. Notices for/by police with service, date/time, region, target and message. With urgency annotation.
    o – CopNET.org
    o – CopSeek. Police and law enforcement search engine and directory.
    o – Crime spider. “…best crime and law enforcement sites and categorized topics. … on criminalistics, forensic anthropology, FBI, unsolved murders, homicide investigation techniques, child abuse, domestic violence, the death penalty, terrorism, criminal justice, law and courts, behavioral profiling, gang violence, juvenile crime, missing persons, serial killers or mass murderers, criminals, police, crime scene photos, …”
    o – Officer.com Directory of agencies. Links to law enforcement agency websites internationally, covering the USA and all other countries (see International Agencies).
    o – Refdesk.com. Facts encyclopedia: crime and law enforcement.

    edited 25apr17 AR

    General and special purpose maps. Also consider country information categories, as well as transport.
    o – CIA Maps. Three types of maps for most countries: administrative maps contain information about national boundaries, capitals and administrative divisions; physiographic maps identify bodies of water, deserts, mountains, historic sites, archaeological sites, elevation points and plains; transportation maps show international boundaries, expressways, roads, railroads, canals and major airports.
    o – Perry Castaneda library map collection. Large, regularly updated; many simple country and city maps, sometimes surprisingly detailed. Maps of current interest on top, with references to maps on other sites. Very useful.
    o – ReliefWeb maps and updates. Great map service; often very detailed and up-to-date maps of crisis areas.
    o – United Nations Cartographic Section. Offers general maps by region or country. You can also choose to view UN mission maps. Maps are in PDF.
    o – United Nations Map Library. The map collection houses over 80,000 maps, some 3,000 atlases, gazetteers, travel guides, reference works and digital products.
    o – Bing map NL. Aerial view and bird’s-eye view.
    o – Funda. Broker of rental and purchase houses. For each house: maps, pictures, and 3D impressions.
    o – Google map NL. With street view and satellite view. Fairly detailed satellite images.
    o – Kadaster PDOK. Kadaster ‘Publieke Dienstverlening op de Kaart’, in Dutch; maps and geodatasets of The Netherlands, downloadable, free.
    o – Standard time zones of the world. World map with indications of time zones in UTC.
    o – Time Zone Abbreviations – Worldwide List. Alphabetical listing of the world time zone abbreviations with full name, location and UTC offset.
    o – EastView GeoSpatial. Topographic maps, geological maps, nautical maps, imagery. International, global coverage. All information originating from the former USSR. Very large. Fee-based.
    o – MapQuest. Does not hold as many features as Google Maps, but nevertheless offers a nice alternative. Check the new feature “Travel Blogs”.
    o – Maps.com. Commercial provider of mapping products.
    o – OpenStreetMap. “OpenStreetMap is a free editable map of the whole world. OpenStreetMap allows you to view, edit and use geographical data in a collaborative way from anywhere on Earth.”
    o – Travel Journals
    o – WikiMapia. As the name suggests, a combination of a wiki and satellite imagery. With some moderator functionality and links to unblurred maps.
    o – World Map Collections. “…are a cooperative project of several public and private universities of Florida and the Florida Department of Environmental Protection to make digitized modern and antique maps available on the Web. The Florida and Caribbean collections are particularly strong, but Africa, the Americas, and the Middle East are also represented.”

    edited 25apr13 gn

    Introduction: Mashups combine (mostly) maps with other data into ‘new’ information.
    o – Directory of Mashups. Coldbeans. Mainly Twitter mashups.
    o – Mapmakerpedia. Google wiki site about crowdsourced maps.
    o – Mashups Directory. Listing of the latest mashups.
    o – The World’s News on Google Maps. Nice example of a useful mashup.
    o – Twitter trends. Plots trending topics on Google maps.
    Introduction: Consider travel shops and outdoor shops, which typically have a book shop as well, with excellent map departments.
    o – Geografische boekhandel Jacob van Wijngaarden
    o – David Rumsey Map Collection. The historical map collection has over 30,000 maps and images online. The collection focuses on rare 18th- and 19th-century North American and South American maps and other cartographic materials. Historic maps of the World, Europe, Asia, and Africa are also represented.
    o – Index Mundi. Historical maps only; fairly limited in scope.
    o – Old Maps Online. This portal provides access to collections of historical maps in libraries around the world. Choose an area of interest and narrow by date.
    o – Bing maps. Not as detailed as Google’s, but has a very useful bird’s-eye view.
    o – Google maps. With route planner.
    o – ANWB routeplanner Europa. Very good planner, covering Europe, with route and maps. The route has references to the map. The layout is easier to use than that of RouteNET.
    o – RAC Route planner. Covering Europe.
    o – RouteNET. Beautiful planner, with good maps and very detailed descriptions. Excellent. The route has references to the map. Covers Europe. In Dutch.
    o – ViaMichelin. Covering Europe, with pictures of the signposts to follow.
    o – the AA .com. Covers the UK, Ireland and Europe, but you can’t get from Europe to the UK or vice versa.
    o – MapBlast! Now redirects to Bing Maps.
    o – Roelf Oddens Maps. No longer available.

    edited 17oct10 AR

    Health information related to individual trauma and country health, as well as disaster relief and early response.
    o – Country health profiles. For Africa.
    o – Landelijk Coordinatiecentrum Reizigersadvies LCR. In Dutch. Includes a country list and news with information about vaccinations and anti-malaria recommendations.
    o – Mednar. Deep Web Technologies. Federated, deep web search engine that returns results in real time from multiple sources. Also see the description with Biznar.
    o – ReliefWeb. NGO aggregator. Must use. “…the world’s leading on-line gateway to information (documents and maps) on humanitarian emergencies and disasters. An independent vehicle of information, designed specifically to assist the international humanitarian community in effective delivery of emergency assistance, it provides timely, reliable and relevant information as events unfold, while emphasizing the coverage of “forgotten emergencies” at the same time. ReliefWeb was launched in October 1996 and is administered by the UN Office for the Coordination of Humanitarian Affairs (OCHA).”
    o – Yahoo Travel Health and Medicine. Yahoo directory.

    Medical/health news

    o – EMM MedISys. Aggregator for medical news, from the European Media Monitor.
    o – Centers for Disease Control and Prevention
    o – World Health Organization. United Nations. “…responsible for providing leadership on global health matters, shaping the health research agenda, setting norms and standards, articulating evidence-based policy options, providing technical support to countries and monitoring and assessing health trends.”
    o – Global Disaster Alert and Coordination System. “…provides near real-time alerts about natural disasters around the world and tools to facilitate response coordination, including media monitoring, map catalogues and Virtual On-Site Operations Coordination Centre.”
    o – Global Disaster Map RSOE EDIS. Worldwide alert map, by the Emergency and Disaster Information Service (EDIS), Budapest, Hungary.
    o – Global disaster watch. Monitors natural disasters: climate change, cyclones, drought, earthquakes, flooding, freak waves, hurricanes, landslides, meteor strikes, mystery booms/skyquakes, pandemics, record-breaking disasters, solar flares, space weather, tropical storms, tsunamis, volcanoes, unusual animal behavior, weather extremes and wildfires; disaster archives from 1998 to present.
    o – National Center for Medical Intelligence (NCMI). Formerly the ‘Armed Forces Medical Intelligence Center (AFMIC)’, based in Fort Detrick (MD). Now only available via Intelink-U, which is for .gov or .mil domains only; others need an account.
    o – Medline. US National Library of Medicine. Made up of Index Medicus and the Index to Dental Literature. Free. PubMed is basically Medline with a little extra.
    o – MedlinePlus
    o – PubMed. US National Library of Medicine. The free and very useful version of Medline. The advanced version of the database offers the most options.
    o – EmBase. Elsevier. Excerpta Medica. 18 million biomedical bibliographic records from 1974 to present, with 500,000 additions per annum. Competitor to Medline.

    Clever use of a search engine enables finding music quickly and easily. Try intitle:”index of” /mp3 OR /wma songtitle to find lists of songs by your favourite composer.
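    The trick works because open directory listings are plain pages titled ‘Index of /…’. As an illustration only (the search URL uses the standard q= parameter; `index_of_search_url` is a hypothetical helper, and exact operator behaviour can change over time), the query can be assembled and URL-encoded like this:

```python
from urllib.parse import quote_plus

def index_of_search_url(title, filetypes=("mp3", "wma")):
    """Build a search URL for the intitle:"index of" directory-listing query."""
    type_clause = " OR ".join("/" + ext for ext in filetypes)
    query = 'intitle:"index of" {} {}'.format(type_clause, title)
    return "https://www.google.com/search?q=" + quote_plus(query)
```

    Calling index_of_search_url("moonlight sonata") yields the encoded form of intitle:"index of" /mp3 OR /wma moonlight sonata, ready to paste into a browser.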
    Introduction: For downloading or streaming music.
    o – Clickster. An MP3 search engine client that looks for music on websites.
    o – LastSharp. Softonic. Little tool that downloads music in the background from www.last.fm. Last.fm streams music by genre or category; LastSharp then stores the music. An alternative to LastSharp is No23 Recorder.

    edited 1sep17 AR

    o – Agence France Presse. Most prestigious wire agency but one.
    o – BBC
    o – Reuters. Most prestigious wire agency in the world.
    o – Agence France Presse. Most prestigious wire agency but one.
    o – The Press Trust of India. National press agency of India.
    Introduction: The general press wire service of the Netherlands is ANP, which is fee-based.
    o – NOS
    o – RNW International. Radio Netherlands International. World service.
    o – Teletekst
    o – Al-Jazeerah
    o – XINHUA
    o – Interfax. Interfax international information group. Covers Russia, China and Eurasia. Offers about 100 information services in weekly and daily intelligence reports. Fee-based.
    o – BBC
    o – Reuters. Most prestigious wire agency in the world.
    o – Reuters Today
    o – AP. Via TBO. Top international news.
    o – Bloomberg News. Business news service. Markets and developments. Large, reliable. Also many commercial options for local solutions: desktop research, news integration, trend analysis, etc.
    o – CNN. Owned by Warner Bros., famous for comedy productions.
    o – RFE/RL
    o – 1stHeadlines
    o – AlertNet. Reuters Foundation.
    o – Google
    o – IRIN. UN Office for the Coordination of Humanitarian Affairs. Humanitarian news and analysis.
    o – MSN/NBC
    o – NewsNow.co.uk. Indexes thousands of news sites on the Internet. Focusses on news content, and updates live every few minutes. Also has sophisticated filtering technology which uses a combination of keywords and metadata to filter links to articles into hundreds of newsfeed topics.
    o – Yahoo! News. World news.
    o – Topix. “…links news from 50,000 sources to 360,000 lively user-generated forums. Topix also works with the nation’s major media companies to grow and engage their online audiences through forums, classifieds, publishing platforms and RSS feeds.”
    o – Europe Media Monitor (EMM). News aggregator and clustering service, developed by the Joint Research Centre. Offers an EMM NewsBrief with all the news clustered by frequency, themes and a timeline, and an EMM NewsExplorer offering the news for all official languages, clustered, with references to countries and people. Use EMM MedISys for medical news analyses. Free.
    o – Silobreaker. Silobreaker Ltd, London UK/Stockholm. Internet news crawler and aggregator with many tools and functionalities to help analyse developments: keywords, authority files, metadata, frequency tables of events, persons, network analysis, biographies. Personalisation of pages, and much more. Fee-based; consider using EMM. Very good and easy to use. Originally by Infosphere, Sweden.
    o – 10×10. Useless, but fun, and a great idea. “10×10™ (‘ten by ten’) is an interactive exploration of the words and pictures that define the time. The result is an often moving, sometimes shocking, occasionally frivolous, but always fitting snapshot of our world. Every hour, 10×10 collects the 100 words and pictures that matter most on a global scale, and presents them as a single image, taken to encapsulate that moment in time.”

    edited 28mar2020 AR

    o – Frankfurter Allgemeine
    o – Guardian
    o – Washington Post

    edited 07jan19 AR

    Introduction: The best Belgian newspapers, with a few aggregators and a link to Belgian magazines.
    o – De Standaard. Quality Dutch-language Belgian newspaper.
    o – De Tijd. Mainly financial news. Dutch-language.
    o – Grenz Echo. Quality German-language Belgian newspaper.
    o – Guide Presse. French directory that lists French newspapers and magazines.
    o – Krantenkoppen.be. Belgian headlines aggregation site. Hyperlinked articles from 15 Belgian newspapers (12 NL, 3 FR). Also includes foreign (non-Belgian) newspapers.
    o – Le Soir. Quality French-language Belgian newspaper.
    o – MO*. Belgian magazine. MO* focuses on third-world countries. Available in 4 languages: Dutch, Spanish, English and French.
    o – Le Figaro
    o – Le Monde
    o – Frankfurter Allgemeine Zeitung
    o – Suddeutsche Zeitung
    Introduction: Also see The Press in Italy (BBC) for a critical overview of the Italian news.
    o – Corriere della Sera
    o – La Repubblica
    o – La Stampa

    edited 7may19 AR

    o – Algemeen Dagblad
    o – Hoofdnieuws. In Dutch. Main news headlines of several Dutch newspapers and wires, each in its own frame. Categorized in tabs. With filtering. Based on the Rnews RSS/Atom feed aggregator.
    o – NRC Handelsblad
    o – Telegraaf
    o – Volkskrant
    Introduction: Also see The press in Russia (BBC) for a critical review of the Russian press.
    o – Izvestia
    o – Kommersant
    o – Komsomolskaya Pravda. Russia.
    o – El Pais
    o – Neue Zurcher Zeitung. Switzerland.
    o – Financial Times
    o – Guardian
    o – The Independent
    o – The Times

    edited 13may19 ar

    o – Christian Science Monitor
    o – International New York Times. Formerly the International Herald Tribune; changed October 2013.
    o – New York Times
    o – Washington Post
    o – Washington Times
    o – NL newspapers. Searches the main NL newspapers only: Parool, AD, Trouw, NRC, VK, Telegraaf.

    edited 3oct19 AR

    Introduction: Coverage of the search engines / directories below differs greatly. Use about three to get a decent balance.
    o – ABYZ News Links. Newspapers and news media guide. International.
    o – Foreign newspapers online
    o – NewsCentral. Just 3,500 newspapers, international, organised by continent and country.
    o – NewsWealth.com International news media directory. Newspaper directory for the USA and international. Also tabloids, US magazines, US radio and live Internet TV channels.
    o – Newslink newspapers. International, arranged by region or state. Also by category. Newspapers, magazines, radio/TV and resources. Coverage is less than Wombat’s, but reasonable.
    o – Online newspaper directory for the world. Web Wombat Pty, Australia. International in scope; covers thousands of digital newspapers as well as magazines. Browsing by country and region. Coverage looks good.
    o – World-newspapers.com. English-language newspapers and radio stations only. Arranged by continent and country.

    edited 02jun17 ar

    o – ICVA. International Council of Voluntary Agencies. ICVA is a global network of non-governmental organisations.
    o – Idealist.org. Idealist aims at connecting people and organisations with a keen interest in voluntary work. Search for jobs, events, etc.
    o – UN DPI NGO dir. UN Department of Public Information for NGOs.
    o – WANGO. The World Association of Non-Governmental Organizations (WANGO) is an international organization uniting NGOs worldwide.
    o – Worldwide NGO Directory. Provided by WANGO. Browse by region (to see a list of NGOs working in that region) or search using keywords, mission, area of focus, country and region (down to address level).

    edited 6jan11 AR

    Most of these also publish full-text reports on all kinds of subjects in the field of international relations. The ‘R’ links to full-text reports; the ‘J’ links to journals/newsletters.
    o – International Crisis GroupInternational, main office in Brussels, BL. Excellent reports, full text. Monthly Crisis Watch. R
    o – International Security NetworkMust read. Very good selection of reports and essays, a fact database (FIRST) , links to other think tanks and lists of abbreviations / acronyms. R
    o – International Atomic Energy AgencyR.
    o – International Strategic Studies Association (ASSA)Washington DC. Publishes the Defense & Foreign Affairs Handbook.
    o – Belfer Center for Science and International Affairs
    o – Center for Contemporary ConflictUS Navy. R, Strategic Insights.
    o – Center for Defence and International Security Studies
    o – Center for Defense Information
    o – Center for Nonproliferation Studies: Part of MIIS, Monterey CA, USA. J, P
    o – Center for Peace and Security Studies
    o – Center for Strategic and International Studies (US, Washington). R, J
    o – Council on Foreign Relations
    o – Instituut Clingendael: Based in The Hague, The Netherlands. In Dutch. J, R.
    o – International Institute for Strategic Studies (IISS): Excellent source of top-quality reports. Based in London, UK. Publishes, among others, Military Balance, Strategic Survey, Survival and the Armed Conflict Database. Publications available through Taylor & Francis.
    o – John M. Olin Institute for Strategic Studies
    o – Stockholm International Peace Research Institute (SIPRI)
    o – Amnesty International: R
    o – Berliner Zentrum fur Internationale Friedenseinsatze
    o – CATO Institute
    o – Congressional Research Service
    o – DEMOS
    o – Energy Information Agency
    o – European Union
    o – GTZ Deutsche Gesellschaft fur Technische Zusammenarbeit: GTZ supports the German Government in achieving its development-policy objectives for political, economic, ecological and social development in a globalised world.
    o – Human Rights Watch
    o – ICRC
    o – IMF
    o – IWPR
    o – InterPol
    o – Interkerkelijk Vredesberaad
    o – NATO
    o – National Intelligence Council
    o – Nederlands Instituut voor Oorlogsdocumentatie
    o – OHR
    o – OSCE
    o – Peace and Conflict
    o – ReliefWeb
    o – Silk Road Studies Program
    o – The Brookings Institution
    o – Transnational Institute: TNI is an international network of activist-scholars committed to critical analyses of the global problems of today and tomorrow.
    o – USAid
    o – United Nations
    o – United Nations Office on Drugs and Crime: UNODC is a global leader in the fight against illicit drugs and international crime. Important publication: World Drug Report. R
    o – United Nations UNHCR
    o – WEU
    o – Weatherhead Center for International Affairs (Harvard Univ.)
    o – World Bank
    o – World Trade Organisation
    o – Electronic Privacy Information Centre (EPIC): ”EPIC is a public interest research center in Washington, D.C. It was established in 1994 to focus public attention on emerging civil liberties issues and to protect privacy, the First Amendment, and constitutional values. EPIC publishes […] the EPIC Alert [cut] reports and even books about privacy, open government, free speech, and other important topics related to civil liberties.”
    o – COOP
    o – Ken Davies list
    o – Marburg Universitaet
    o – Yahoo listing


    edited 26jul09 AR

    Some links to sites that do ”OSINT”, whatever is meant by that. Mainly crap.
    o – Bangladesh Open Source Intelligence Monitors: Blog, hosted by BlogSpot.com.
    o – Internet Haganah: Daily news regarding Israeli affairs, project of ”The Society for Internet Research” (SoFIR, focused on combating global jihad).
    o – O P E N S O U R C E I N T E L L I G E N C E: Daily news regarding Israeli affairs.
    o – OSINT Global
    o – OSINT web page: About OSINT, with many links to OSINT-relevant sites.
    o – The OSINT Journal


    edited 13dec10 AR

    Patents are basically free, but searching for them can be difficult. For more advanced search functionality, a commercial provider such as Dialog and its World Patent Index is advised.
    Introduction: A few general-purpose sites for finding patents.
    o – Delphion Intellectual Property Network
    o – Google Patents: Google’s patent database.
    Introduction: A couple of tools are indispensable when doing patent research, such as a list of country codes and the IPC.
    o – Country codes: Two-letter country code table according to WIPO.
    o – National patent databases: Extensive list of national patent databases.
    o – The International Patent Classification (IPC): The IPC is the main search tool for most patent databases.
    Introduction: Sources for finding international patents.
    o – European Patent Office (EPO)
    o – Patent service providers
    o – World Intellectual Property Organization: Also with links to all the member countries and their patent offices. Free access to all international patent applications through PatentScope.
    o – Octrooicentrum Nederland: Official Dutch patent office.
    o – US Patent and Trademark Office


    edited 05mar2013 GN

    o – ElectionWorld: Everything about global politics.
    o – Political resources on the net
    o – DMOZ Governments: Directory of governmental websites.
    o – Inter-Parliamentary Union (IPU): The IPU is the international organisation of parliaments, established in 1889.
    o – PARLINE Database of National Parliaments: PARLINE is the IPU’s database on national parliaments. It contains information on the structure and working methods of 266 parliamentary chambers in all of the 190 countries where a national legislature exists. Does not contain information about regional governments.
    o – Election Resources on the Internet: This site provides a collection of links about elections around the world. Data from some 21 countries and 4 autonomous regions is available for download in CSV format.
    o – Elections and electoral systems around the world: No longer exists as such. Has been replaced by Political Science Resources (see annotation below). Site still viewable in the UK Web Archive.
    o – PARLINE Recent Elections Page: Election reports include the background to the election and of course the election results. The results contain statistics on the distribution of votes and seats among political parties, as well as a breakdown of seats by sex and, when available, by the age and profession of members of parliament. A brief summary of the electoral system is also available for each parliamentary chamber. Historical archives of election results are available in PARLINE back to 1967. A full account of each year’s parliamentary elections is published in the Chronicle of parliamentary elections, and an annual overview is found in the Panorama of parliamentary elections.
    o – Political Science Resources: Politics and government, mainly in the UK and the USA. The “successor” to Political Science Resources.
    o – Political Transformation and the Electoral Process in Post-Communist Europe: No longer updated since 2001! Still valuable for historical purposes.


    o – Defense Daily Network
    o – DefenseLINK
    o – Semiconductor Online
    o – The Dismal Scientist
    o – VerticalNet


    Introduction: An avatar may be needed to hide your real identity.
    o – Fake Name Generator: Generates a full avatar with names, addresses, dates, passwords, websites, age, employer, tracking numbers, etc. Very complete for immediate use.
    o – Fake Name Generator: Generates fake names and bios of English, Danish, German, Spanish and Greek persons.
    o – Fake person generator: Generates fake names plus an extensive bio.
    o – Thispersondoesnotexist: Pictures of people generated by AI. Good for use in an avatar.


    edited 22sep17 AR

    o – FlightAware: Flight tracker, live, visualised. International.
    o – Live air traffic: Flight tracker, live, international, based on ADS-B transponders. Slow, sometimes unreliable; the news lags two months behind.
    o – Planefinder.net
    o – Actuele spoorkaart Nederland: Train tracker for The Netherlands, in Dutch. Live tracking of trains on a map, including delays. With links to similar tracking maps abroad.
    o – Marinetraffic: Live, clickable map that tracks AIS vessels with ship information, numbering, route, pictures and destination. Detailed information. Port information with expected arrival and departure times. Lists of moored vessels with ship data and photographs.
    o – Shipfinder: Live, clickable map to track vessels, returning routes and ship data. International.
    o – OVZoeker: Clickable, live map tracking buses, trams and trains in The Netherlands. Returns information about the vehicle and the route, with departure and arrival times, etc.
    o – Travic: transit visualization client: University of Freiburg. Clickable, live tracking of public transport in The Netherlands: buses, trams, trains. International, but not exhaustive.


    o – Radio 1
    o – Radio 1 journaal
    o – Radio 2
    o – Radio 3
    o – BBC World Service
    o – Live radio on the internet
    o – PublicRadioFan: International. Also covers podcasts.
    o – Radio locator: Links to 10,000 radio station web pages and over 2,500 audio streams. International.
    o – Radiostationworld: your global radio station directory
    o – Web Radio: Formerly: Radio directory. BRS Media Inc. About 12,000 radio stations online.


    o – Salafi Publications


    edited 20apr14 AR

    Most are commercial, providing images with watermarks for free.
    o – EastView GeoSpatial: Topographic maps, geological maps, nautical maps, imagery. International, global coverage. All information originating from the former USSR. Very large. Fee-based.
    o – FlashEarth: Flash-based, zoomable map of the world using satellite and aerial imagery from several mapping websites.
    o – Google Earth: Comes as a client, with two commercial variations.
    o – MapMart: MapMart offers a variety of high, medium and low resolution satellite imagery datasets for nearly every location on earth.
    o – SIC: Satellite Imagery Corporation.
    o – TerraServer
    o – WikiMapia: As the name suggests, a combination of a wiki and satellite imagery. With some moderator functionality and links to unblurred maps.
    o – GlobeXplorer: Image site.
    o – SpaceImagery


    edited 23sep13 AR

    Listing guides to find global ports, local ports, vessel tracking etc.
    o – Port of Antwerp
    o – Port of Rotterdam
    o – MarineTraffic.com: Live vessel tracker. Global. Indicates position and essential vessel information on a map. Categorized per vessel type. Port listings with arrival and departure times. Based on AIS data.
    o – IndiaOneStop.Com: Ports and maritime services of the world. International.
    o – PortFocus: Ports, harbours and marinas worldwide. ”Looking for marine transportation links, maritime services, yacht and boat supplies, shipping cruise and sea freight links, bunkerfuels and bunkering, information about a port, harbour or marina? Choose sovereign country (also for islands and dependencies, eg island of St Maartens falls under the Netherlands) then select the port.”
    o – Ports.com: Listing of about 3,300 ports worldwide. Very short descriptions per port, but good to find out addresses of ports.
    o – World Port Links: Links to port authorities worldwide.
    o – Yahoo Port authorities: Overview of directories.

    edited 05jun14 GN

    o – International Maritime Organization (IMO): IMO is the United Nations specialized agency with responsibility for the safety and security of shipping and the prevention of marine pollution by ships.

    edited 05jun14 GN

    o – IMB Piracy Reporting Centre: The International Maritime Bureau’s (IMB) free 24-hour service to the seafarer.
    o – IMO Piracy Reports: Monthly reports on piracy incidents.
    o – Live Piracy Map: This map shows all the piracy and armed robbery incidents reported to the IMB Piracy Reporting Centre.
    o – Piracy and Armed Robbery against ships: From the International Maritime Organization (IMO). Contains the UN definition of piracy, some measures for protection and links to laws, regulations, etc.


    edited 10jan13 ar

    o – Download.com
    o – Tucows xs4all
    Introduction: Offers lists of previous versions of software packages, apps and tools, in case the latest version is not compatible with…, something.
    o – OldApps: Lists software for Linux, Mac and Windows, in different versions.
    Introduction: Some tools useful for network security and penetration testing.
    WARNING: these tools may or may not be illegal in your country. Use at your own risk. I do not accept any responsibility for the use of these tools or any consequences thereof.
    o – Aircrack-ng: An 802.11 WEP and WPA-PSK key cracking program that can recover keys once enough data packets have been captured. It implements the standard FMS attack along with some optimizations like KoreK attacks, as well as the all-new PTW attack, thus making the attack much faster compared to other WEP cracking tools.
    o – Blue’s port scanner: Very fast port scanner. German.
    o – Cain & Abel: By Oxid.it. Recovery of various kinds of passwords by sniffing the network, cracking encrypted passwords using dictionary, brute-force and cryptanalysis attacks, recording VoIP conversations, decoding scrambled passwords, recovering wireless network keys, revealing password boxes, uncovering cached passwords and analyzing routing protocols. For XP.
    o – Distributed password recovery: To recover passwords from a variety of documents.
    o – Nmap port scanner
    o – sharK PWB++: Remote administration tool to take full control of a remote computer.
    o – Copernic: Copernic Agent is the client to assist in searching the web. There are other very useful products as well, also to manage downloaded data. Very good and cheap set of tools.
    o – Zotero: Free plugin for, among others, Firefox, to collect, organize, cite and share your research sources.
    Introduction: Mainly used for forensic research, but also for other applications.
    o – Maltego: Maltego is an open source intelligence and forensics application. Used to determine the relationships and real-world links between groups of people (social networks), companies, organizations, web sites, Internet infrastructure (domains, DNS names, netblocks, IP addresses, phrases, affiliations), documents and files. Not easy to use, with a somewhat overwhelming interface, but a must-have once the basics are mastered.
    o – Palantir: “…a platform for information analysis. Palantir was designed for environments where the fragments of data that an analyst combines to tell the larger story are spread across a vast set of starting material. Palantir provides flexible tools to import and model data, intuitive constructs to search against this data, and powerful techniques to iteratively define and test hypotheses.”
    o – Recorded Future


    edited 4may18 AR


    o – Beaucoup
    o – RBA – Search Strategies for the Internet
    o – Search Engine Colossus
    o – Search Engine Watch
    o – Search Engines Worldwide


    edited 10jun14 ar

    o – Lyng: Large, extensive. International, listing almost all satellites and channels per country and satellite.
    o – SatcoDX
    Introduction: Also consider sites with recordings of recent TV broadcasts (Dutch: ‘uitzending gemist’) and of course Usenet.
    o – AOL video: Still in beta. Formerly ”Singing Fish”. Channels.
    o – Blinkx: Search engine that enables searching not only on added keywords but also on the transcript of the actual broadcast. Categorized. Uses Autonomy technology. Annoying ads.
    o – Dailymotion: French, Paris. Available in 34 localised versions and 16 languages. Better quality content. Teamed up with Blinkx.
    o – Keek: New social network that enables the upload of short videos (approx. 30 seconds). Searchable; follow users. In short: the Video-Twitter.
    o – MetaCafe
    o – Search for Video
    o – Vimeo: From New York, originated from video makers.
    o – Vine: Owned by Twitter. Platform on iOS and Android to share short looping video clips. Mobile only. With chat function.
    o – Yahoo video
    o – YouTube: Primary source for video clips and longer fragments. Censored. Bought by Google in 2006.
    o – Business: Go to media and entertainment; film; docu and nonfiction; distributors.
    o – DocuSeek film & video finder
    o – DocumentaryTube: A directory of handpicked documentaries. Contains a “Top 100”. Searchable, or browse through the more than 30 categories.
    o – European Broadcasting Union
    o – Georgetown University Library: Limited to the Middle East and Africa, but of high quality.
    o – Insight News TV
    o – Screencast-o-matic
    o – Google video: Closed down 2011, content removed.
    o – Truveo: Closed. See AOL Video.


    edited 03jun14 gn

    o – National Consortium for the Study of Terrorism and Responses to Terrorism (START): START is a university-based research and education center comprising an international network of scholars committed to the scientific study of the causes and human consequences of terrorism in the United States and around the world.
    o – American Society for Industrial Security
    o – Combating Terrorism Center: Independent research center located at West Point. Publishes a free monthly magazine, The Sentinel.
    o – Foundation for Defense of Democracies: A policy institute focusing on terrorism, the ideologies that drive terrorism and the policies that can most effectively eradicate terrorism.
    o – Jamestown Foundation Terrorism Monitor: Previously named the “Global Terrorism Analysis Program” (GTA).
    o – National Counterterrorism Center: USA, NCTC. “…integrating and analyzing all intelligence pertaining to terrorism possessed or acquired by the U.S. government, principal advisor to the DNI on intelligence operations and analysis relating to counterterrorism.” Also publishes the Counterterrorism Calendar, a worldwide list of terrorist groups and profiles, and interactive timeline maps.
    o – Perspectives on Terrorism: Perspectives on Terrorism is a peer-reviewed online journal of the Terrorism Research Initiative (TRI). It is published six times per year. Subscription is free.
    o – RAND Database of Worldwide Terrorism Incidents: The RAND Database of Worldwide Terrorism Incidents (RDWTI) is a compilation of data from 1968 through 2009.
    o – RAND Terrorism and Homeland Security
    o – START’s Global Terrorism Database: The Global Terrorism Database (GTD) is an open-source database including information on terrorist events around the world from 1970 through 2011 (with additional annual updates planned for the future). Includes more than 113,000 cases. Holds the Terrorist Organisation Profiles (TOP) from the MIPT Terrorism Knowledge Base (TKB).
    o – South Asia Terrorism Portal: SATP is the largest website on terrorism and low-intensity warfare in South Asia.
    o – The Washington Institute Policy Analysis: Terrorism: Policy and analytic publications focused on the Middle East. More than just terrorism-related.
    o – Transnational Threats Project: From the Center for Strategic and International Studies (CSIS). The Transnational Threats Project assesses terror, insurgent and criminal networks and the impact of government responses through targeted field work and an extensive network of specialists.
    o – International Relations and Security Network: Terrorism issues: Choose “By subject” and then pick “Terrorism” in the left side panel. The result is a listing of publications from the better institutes and organisations. Select the information by publication, organisation, article, video or audio. Also take a look at the left side panel to select more specific sub-topics in the field of terrorism.
    o – Internet websites and links for (counter)terrorism research: By Berto Jongman. In: Perspectives on Terrorism, Vol. 5, no. 1 (2011). Excellent categorised and annotated listing of selected terrorism websites: general sites, news portals, databases, group profiles, journals, organisations, etc. Also covering major topics like cyberterrorism and global jihad.
    o – Counterterrorism Blog: Excellent blog, ceased on March 11, 2011.
    o – MIPT Terrorism Knowledge Base: Online portal containing information on terrorist incidents, leaders, groups and related court cases. The database is now hosted by the National Consortium for the Study of Terrorism and Responses to Terrorism. Profiles are still available via START.


    edited 6dec09 AR

    Moved to ‘Terminology‘.


    edited 7jun17 AR

    Some links about transport in general. Also consider maps, airfields, public transport, and ports.
    o – Airline and Airport Code Search: Returns the names of airlines by their officially designated IATA codes.
    o – Transportation Research Board of the National Academies: Transportation Research Board.
    o – World road statistics: International Road Federation. ”…first appeared in 1958 and is based on data compiled from official sources within national statistics offices and national road administrations in up to 196 countries. The WRS contain data on over 100 variables, including data on road networks, vehicle fleets, road traffic accidents, fuel consumption, and road expenditures”.
    o – Airport codes The Netherlands: Listing of all Dutch airports with airport codes (IATA and ICAO). International.
    o – Best wel snel!: Plots the highest car speeds measured (from 170 up) in The Netherlands on a map. Live. Based on open data provided by the NDW (Nationale Databank Wegverkeersgegevens).
    o – Kentekenrapport: Dutch license plate information; returns specifications, car data and more. The fee-based version also returns history, owners, pictures and value.
    o – RDW kenteken gegevens: Dutch. Check on license numbers. Returns technical data of the vehicle. No owner history.
    o – Department for Transport


    edited 12dec13 ar

    Directories of online surveillance or video cameras, many unprotected.
    o – CamVista.com: Live webcam views of some of the world’s favourite places. Travel portal.
    o – EarthCam: Worldwide webcams by map or by rating. International. Searchable.
    o – Leonard’s Cam World: 15,000+ webcams worldwide.
    o – Network live IP video cameras directory Insecam.com: “the world biggest directory of online surveillance security IP cameras. Watch live street, traffic, parking, office, road, beach, earth online webcams.”
    o – Online camera: Low number of worldwide webcams. Seems to have stopped since 2012.
    o – Shodan explore tag Webcam: List of dozens of standard Shodan queries to find (un)protected webcams worldwide.
    o – Watch the newest CCTV live stream cameras worldwide


    Source

    MediaWiki


    From MediaWiki.org


    The MediaWiki software is used by tens of thousands of websites and thousands of companies and organizations. It powers Wikipedia and also this website. MediaWiki helps you collect and organize knowledge and make it available to people. It’s powerful, multilingual, free and open, extensible, customizable, reliable, and free of charge. Find out more and if MediaWiki is right for you.

    Set up and run MediaWiki

    Edit and use MediaWiki

    Develop and extend code

    Get help and contribute

    News

    2020-06-24 – Security: MediaWiki 1.31.8, 1.33.4 and 1.34.2 security releases are now available. Note: MediaWiki 1.33.x versions are now end of life.
    2020-05-09 to 2020-05-10 – Wikimedia Remote Hackathon 2020 (online).
    2020-03-30 – The Wikimedia Technical blog has been launched.

    More news

    Retrieved from “https://www.mediawiki.org/w/index.php?title=MediaWiki&oldid=3878227”

    Source

    7 Search Engines That Pay You to Search the Web

    Why not get paid to do something you’re already doing? Stop missing out on easy money with these seven platforms!

    Best part? You’re able to use all of these simultaneously. We recommend, however, picking one survey, poll, video, etc. platform and sticking with it.

     

    1. Earn $0.05 – $1 per search with Qmee.

    Qmee is the simplest out of these seven options and might just be my favorite. No need to worry about the search engine you’re using, Qmee works with Google, Bing, Yahoo, Amazon and eBay!

    All you need to do is download the browser extension for Qmee (you’ll be prompted when you go to their website), search how you normally would, and you’ll see the dollars and cents add up in the corner! No point system, no gimmicks, and no limit to how much you can earn.

    You can cash out anytime with a PayPal account, transfer earnings to a gift card, or donate your earnings to charity.

    Tip: Use Qmee with Bing so you can rack up Bing Rewards money at the same time.

     

    2. Swagbucks gives you $0.03 – $0.39 per search.

    You probably already know about Swagbucks in terms of surveys, polls, and videos, but did you know that Swagbucks pays you to search as well? Search by setting Swagbucks as your default search engine, or search directly from their website. And since they award you based on certain keywords, if you happen to search the right one, you’ll get rewarded! I searched the keyword “car tires” and earned 4 Swagbucks!

    Make sure to download the Swagbucks mobile app and their browser extension to easily see your earnings.

    Try out our top 10 Swagbucks hacks.

     

    3. Earn $0.01 per two searches with Bing Rewards.

    By switching your search engine to Bing, you earn points through Bing’s Rewards program that you can later redeem for things like gift cards, movies, electronics, and more. You can even donate your points to charity.

    Earn 1 credit with two searches, up to 15 credits on desktop, and 10 credits on mobile per day.

    Sign up for Bing Rewards here. To tell how many points you’re earning while you’re searching, download the Bing Rewards Chrome extension.
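    The credit math above is easy to sanity-check. Here's a minimal Python sketch using the rates quoted above (one credit per two searches, assumed to be worth about $0.01 since two searches earn roughly $0.01, capped at 15 desktop and 10 mobile credits per day); the per-credit dollar value is an approximation, not an official figure:

    ```python
    # Back-of-the-envelope ceiling for Bing Rewards search earnings,
    # using the rates quoted above (assumed: 1 credit ~ $0.01).
    CREDIT_VALUE = 0.01   # dollars per credit (earned per two searches)
    DESKTOP_CAP = 15      # max credits per day on desktop
    MOBILE_CAP = 10       # max credits per day on mobile

    daily_max = (DESKTOP_CAP + MOBILE_CAP) * CREDIT_VALUE
    monthly_max = daily_max * 30

    print(f"daily max: ${daily_max:.2f}, monthly max: ${monthly_max:.2f}")
    ```

    Even if you hit both caps every day, that works out to roughly $0.25 a day, or about $7.50 a month.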

     

    4. iRazoo earns you $0.025 per search.

    With iRazoo, earn points through surveys, videos, tasks — and you guessed it — search! The difference with iRazoo is its ‘Search and Win’ approach, meaning points are awarded randomly. So the more you search on iRazoo, the more likely you are to earn!


    You can search directly on their website and cash out with Amazon gift cards.

    RELATED: Retailers That Send You Coupons When You Abandon Your Cart

     

    5. Earn $0.01- $0.02 per search with InboxDollars.

    InboxDollars is an excellent way to make money on all of the usual tasks you have in Swagbucks, iRazoo and GiftHulk. You can also earn up to $0.15 per day with search. Search directly from their website or the InboxDollars app (available for Apple and Android).

    Four searches will usually earn you $0.01 – $0.02. The payout may seem slim compared to the other options, but if InboxDollars is your platform of choice, adding search to your list isn’t a bad idea.

    They’ll send you a check in the mail when you decide to cash out ($30 minimum).

    Check out how we earned $18.25 in 10 minutes with InboxDollars!

     

    6. Earn $0.04 per search with GiftHulk.

    GiftHulk is very similar to Swagbucks and iRazoo in terms of combining efforts via survey, tasks, shopping, and when you use the search bar on their website. Sure, you’re limited to one search per hour, but over time this adds up.

    1,000 ‘HulkCoins’ equals $1, and the lowest cash-out for a gift card is $5.
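    Given those figures and the one-search-per-hour limit mentioned above, reaching the minimum cash-out takes some patience. A quick Python sketch using the numbers quoted above ($0.04 per search, $5 gift-card minimum, one search per hour):

    ```python
    # Searches (and days, at one search per hour) needed to reach
    # GiftHulk's $5 gift-card minimum, using the rates quoted above.
    import math

    PER_SEARCH = 0.04    # dollars earned per search
    MIN_CASHOUT = 5.00   # lowest gift-card cash-out

    searches_needed = math.ceil(MIN_CASHOUT / PER_SEARCH)   # 125 searches
    days_needed = searches_needed / 24                      # at one search per hour

    print(f"{searches_needed} searches, about {days_needed:.1f} days of hourly searching")
    ```

    So a single gift card takes 125 searches, which at the hourly limit means a bit over five days of round-the-clock searching.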

    Tip: GiftHulk and iRazoo are Swagbucks wannabes, and while each brings something good to the table, if you’re going to pick one, Swagbucks is your best bet.

     

    7. Earn $0.01 per search with CashCrate.

    Earn money participating in surveys, watching videos, shopping, and searching with CashCrate! Enter your searches through CashCrate’s website and you’ll earn $0.01 per search, up to 10 times per day.

    No need to request a cash-out either; CashCrate will send you a check in the mail after each month when you meet their $20 minimum.

     

    Note: The seven platforms listed above are able to pay you because they have businesses that pay them to appear in search results and/or for having you complete tasks.

     



    Source