DETECTION AND DISABLING OF DIGITAL CAMERAS
CHAPTER-1 INTRODUCTION
1.1 MOTIVATION AND OVERVIEW
Digital cameras differ from traditional cameras in many ways, but the basic difference is that they use solid-state image sensors to convert light into digital pictures rather than capturing the image on film. Digital imaging has actually been around for a long period of time, but it has been used for other purposes.
The history of digital technology began very early. NASA began dealing with digital imaging technology as far back as the 1960s, as it did with many inventions that have since become public domain; NASA used it to convert signals from analogue to digital. Very soon, other governmental sectors saw the opportunities and advantages of this emerging digital technology and began similar programs involving spy satellites. Today, similar applications are available for free to anyone with internet access: for example, Google's satellite maps show the whole world and even the Moon.
1.1.2 DEVELOPMENT OF DIGITAL CAMERAS
True digital cameras did not simply emerge as a new consumer product; several other products were developed first, leading to their creation. Digital cameras as we know them today first became available to consumers around the mid-70s. At that time, Kodak developed a number of solid-state image sensors which converted available light into digital images. The target customers for the new Kodak digital cameras were both professionals and hobbyists.
From that point on, the camera industry began to develop faster, and the ability to connect to a home computer to download pictures was introduced. This development was combined with software to manipulate and edit pictures, and with special printers dedicated to digital photography.
1.1.3 HOW DO DIGITAL CAMERAS WORK?
In the digital world, data, or information, is represented by strings of 1s and 0s. In this case, these digits translate into the individual pixels, the basic units that combine to make up the image you see. When the capture button on the camera is pressed, a charge-coupled device (CCD) creates an electron equivalent of the captured light, which in turn is converted into a digital value for each pixel. Each picture is stored in the camera's memory until it is downloaded to its destination, usually a computer or a CD. Usually, the camera memory takes the form of a replaceable memory card. Indeed, this is one of the great advantages over traditional cameras: you don't have to buy film.
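As a toy illustration of this representation (the value 200 is an arbitrary assumption), an 8-bit pixel intensity in MATLAB is just a string of eight 1s and 0s:

p = uint8(200);      % one pixel's digitised intensity, 0..255
dec2bin(p, 8)        % ans = '11001000': the bits that represent it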
1.1.4 IMPORTANT FEATURES TO LOOK FOR IN A DIGITAL CAMERA
Resolution is one of the most important features, and in many cases it is one of the top features determining a camera's price. Resolution is a measure of the detail that a specific camera can capture; its basic unit of measurement is the pixel. The higher the number of pixels, the better the camera, because a higher level of detail is captured.
Digital cameras are rated in megapixels (millions of pixels). A 1.0-megapixel camera is generally considered low quality, while a 5.0-megapixel camera (about five million pixels per image, e.g. 2592 × 1944) is often used in professional digital photography, such as for creating studio-grade portraits. The lens is very important when it comes to digital cameras, because the right lens depends directly on what you intend to use the camera for. A lens with a fixed focus and fixed zoom should be used only for simple snapshots. Zoom lenses come in two forms: the optical zoom lens and the digital zoom lens. The optical zoom is preferable because it zooms by changing the actual focal length of the lens, whereas the digital zoom uses an interpolation algorithm: it "infers" new pixel values from neighboring pixels, which results in a grainy photo. Replaceable lenses are found on many higher-end cameras; they increase the camera's versatility. Among those available are zoom lenses, close-up lenses, color lenses for effects, and panoramic lenses.
How many useful digital camera accessories are available for a particular model? As mentioned above, some cameras, such as Kodak's, offer a docking system which is not only the interface to the computer but also doubles as a battery charger when the camera is not in use, ensuring that it starts off with a full charge when needed. Choosing a digital camera is not easy, but once you have decided which particular model you need, you will enjoy taking digital pictures wherever you go: on vacation, at a family dinner, at a party with friends, at school, etc.
1.2 LITERATURE SURVEY
The technology used in this work is image processing. This chapter deals with the method to detect a hidden camera and the ways by which we can neutralize it. An image can be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any point (x, y) is called the intensity. The field of image processing refers to the processing of digital images by means of a digital computer. Image processing is used in fields such as X-ray imaging, gamma-ray imaging, imaging in the microwave band, etc.
1.2.1 IMAGE SEGMENTATION
Segmentation is a process that partitions an image into regions. If we wish to segment an image based on color and, in addition, want to carry out the process on individual planes, it is natural to think first of the HSI color space, because color is conveniently represented in the hue image. However, segmentation is one area in which better results are often obtained by using RGB color vectors.
1.2.2 THRESHOLDING
Because of its intuitive properties, simplicity of implementation, and computational speed, image thresholding enjoys a central position in applications of image segmentation. Consider an image f(x, y) composed of light objects on a dark background, such that object and background pixels have intensity values grouped into two dominant modes. Any point (x, y) at which f(x, y) > T is called an object point; otherwise the point is called a background point. When T is a constant applicable over an entire image, the process is referred to as global thresholding. When the value of T changes over an image, we use the term variable thresholding. The term local or regional thresholding is sometimes used to denote variable thresholding in which the value of T at any point (x, y) depends on properties of a neighborhood of (x, y).
CHAPTER-2 EXISTING SYSTEM
2.1 INTRODUCTION
The existing system is a method for identifying a digital camera from its images based on the sensor's pattern noise. For each camera under investigation, its reference pattern noise is first determined; this serves as a unique identification fingerprint and is obtained by averaging the noise extracted from multiple images using a denoising filter. To identify the camera from a given image, the reference pattern noise is treated as a spread-spectrum watermark whose presence in the image is established using a correlation detector. Experiments on approximately 320 images taken with 9 consumer digital cameras were used to estimate false-alarm and false-rejection rates, and the study additionally examines how the error rates change with common image processing, such as JPEG compression or gamma correction.
2.2 EXPLANATION
As digital images and video continue to replace their analog counterparts, the importance of reliable, inexpensive, and fast identification of digital image origin will only increase. Reliable identification of the device used to acquire a particular digital image would prove especially useful in court for establishing the origin of images presented as evidence. In the same manner as bullet scratches allow forensic examiners to match a bullet to a particular barrel with reliability high enough to be accepted in court, a digital equivalent of bullet scratches should allow reliable matching of a digital image to a sensor. The authors propose using the sensor pattern noise as the tell-tale "scratches" and show that identification is possible even from processed images.
The approach uses the pixel non-uniformity noise, a stochastic component of the pattern noise common to all digital imaging sensors (CCD, CMOS, including Foveon X3, and JFET). The presence of this noise is established using correlation, as in the detection of spread-spectrum watermarks. The reliability of camera identification was investigated for images processed using JPEG compression, gamma correction, and a combination of JPEG compression and in-camera resampling, with experimental results evaluated using false-acceptance (FAR) and false-rejection (FRR) error rates. Notably, the method succeeded even in distinguishing between two cameras of the same brand and model. The techniques described here may help alleviate the computational complexity of brute-force searches by retrieving some information about applied geometrical operations; such searches will, however, inevitably increase the FAR. The problem of camera identification should therefore be approached from multiple directions, combining evidence from other methods, such as feature-based identification, which is less likely to be influenced by geometrical transformations.
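A minimal MATLAB sketch of this identification idea (not the authors' implementation) is given below. The file names and any decision threshold are illustrative assumptions, wiener2 merely stands in for the denoising filter, and all images are assumed to be the same size:

files = {'cam_img1.png', 'cam_img2.png', 'cam_img3.png'};   % images from one camera
ref = 0;
for k = 1:numel(files)
    I   = im2double(rgb2gray(imread(files{k})));
    res = I - wiener2(I, [3 3]);       % noise residual = image - denoised image
    ref = ref + res / numel(files);    % averaged residual = reference pattern noise
end
Q   = im2double(rgb2gray(imread('query.png')));
rq  = Q - wiener2(Q, [3 3]);           % residual of the image under test
rho = corr2(ref, rq);                  % correlation detector
fprintf('correlation with reference pattern: %.4f\n', rho);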
2.3 FORGING AND MALICIOUS PROCESSING
Since camera identification techniques are likely to be used in court, we need to address malicious attacks intended to fool the identification algorithm, such as intentionally removing the pattern noise from an image to prevent identification, or extracting the noise and copying it onto another image to make it appear as if that image was taken with a particular camera. We distinguish two situations: 1) the attacker is informed and has either the camera or many images taken by it, or 2) the attacker is uninformed, in the sense that he only has access to one image.
CHAPTER-3 PROPOSED SYSTEM
3.1 INTRODUCTION
The system locates the camera and then neutralizes it. Every digital camera has an image sensor, known as a CCD, which is retroreflective: it sends light back directly toward its original source at the same angle. Using this property and image processing algorithms, the camera is detected. Once identified, the device beams an invisible infrared laser into the camera's lens, in effect overexposing the photo and rendering it useless. The low energy levels used neutralize cameras but pose neither a health danger to operators nor a physical risk to the cameras.
3.2 DESIGN AND ARCHITECTURE
Fig 3.1: Block diagram. The detector unit consists of a scanning infrared emitter, a CCD test image recorder, an image processing unit, and a camera locator; the disabling unit consists of the timing and control circuitry and an infrared laser beam projector producing the overexposure IR laser beam.
3.3 RETRO REFLECTION BY CCD
A retroreflector is a device or surface that reflects light back to its source with minimum scattering and at the same angle: an electromagnetic wavefront is reflected back along a vector that is parallel to, but opposite in direction from, the wave's source, even when the angle of incidence is greater than zero. The CCD of a camera exhibits this property due to its shape, and this forms the operating principle of the device.
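The principle can be illustrated with a corner-cube model of a retroreflector (a simplification, not a model of the CCD's actual geometry): reflecting a ray off three mutually orthogonal surfaces negates each component of its direction, so the ray returns anti-parallel to its source.

d = [0.3; -0.5; 0.8];  d = d / norm(d);   % incoming ray direction
reflect = @(v, n) v - 2 * (n' * v) * n;   % specular reflection about unit normal n
r = reflect(reflect(reflect(d, [1;0;0]), [0;1;0]), [0;0;1]);
disp(r + d)                               % zero vector: the outgoing ray is -d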
Fig 3.2: Retroreflection by the CCD
3.4 CAMERA DETECTION
3.4.1 SCANNING
The entire area to be protected is scanned using infrared light, produced by infrared LEDs. The circuitry required to produce infrared beams is simple and cheap. The scanning beams sweep through the vertical and horizontal directions of the area to ensure that no camera escapes the device.
3.4.2 WAVELENGTH
The infrared beam used here has a center wavelength of 800-900 nm, which falls under the near-infrared classification. Near infrared is chosen because molar absorptivity in the near-IR region is typically quite small, and near-IR radiation typically penetrates much farther into a sample than mid-infrared radiation, so the retro-reflections are of high intensity. The NIR is generated using IR LEDs. Due to the retroreflective property of the CCD, part of the light is retro-reflected by it, while the infrared beam has no effect on the other objects it hits in the area.
Fig 3.3: Plot of Reflectance vs. Wavelength of Near IR Standard
3.4.3 TEST IMAGE CAPTURE
The area being scanned by the infrared beams is simultaneously recorded. The image acquired at this stage is called the test image; it forms the basis of the subsequent steps of the process. The test image is obtained using high-resolution camcorders. The test image capture must respond very quickly in order to sense even a small change in the position of a camera, and the camcorder should have a wide angle of capture so that the test image covers the entire area. The retro-reflected beams have the same properties as the near IR: they are visible to the camcorders but invisible to human eyes.
3.5 IMAGE PROCESSING
This is the most important aspect of the device. The raw input for image processing is the test image being streamed live, and the detection of the camera is accomplished in this stage. The image processing for detection is done in two steps.
We have coded an algorithm in MATLAB to perform the image processing operations.
3.5.1 DETECTION OF RETRO REFLECTING AREA
The camera is detected by differentiating the retro-reflecting area from the rest of the test image: in the test image, the camera lens appears red, while the rest of the scene appears normal. This key point is used for the differentiation.
3.5.2 THRESHOLDING
During the thresholding process, individual pixels in an image are marked as "object" pixels if their value is greater than some threshold value (assuming an object to be brighter than the background) and as "background" pixels otherwise. The separate RGB components are determined and a threshold value is set as follows (a MATLAB sketch of the iteration is given after the steps):
1. An initial threshold T is chosen; this can be done randomly or by any other method desired.
2. The image is segmented into object and background pixels as described above, creating two sets, where f(m, n) is the value of the pixel in the m-th column and n-th row:
G1 = {f(m, n) : f(m, n) > T} (object pixels)
G2 = {f(m, n) : f(m, n) <= T} (background pixels)
3. The average of each set is computed:
m1 = average value of G1
m2 = average value of G2
4. A new threshold is created that is the average of m1 and m2:
T' = (m1 + m2)/2
5. Go back to step 2, now using the new threshold computed in step 4, and keep repeating until the new threshold matches the one before it (i.e., until convergence is reached).
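The iteration above can be implemented directly in MATLAB. A minimal sketch follows; the file name is an illustrative assumption:

f = im2double(rgb2gray(imread('test_image.jpg')));
T = mean(f(:));                 % step 1: initial threshold (here, the global mean)
Tprev = Inf;
while abs(T - Tprev) > 1e-4     % step 5: repeat until convergence
    G1 = f(f > T);              % step 2: object pixels
    G2 = f(f <= T);             %         background pixels
    m1 = mean(G1);              % step 3: mean of each set
    m2 = mean(G2);
    Tprev = T;
    T = (m1 + m2) / 2;          % step 4: new threshold
end
g = f > T;                      % final object/background segmentation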
This iterative algorithm is a special one-dimensional case of the k-means clustering algorithm, which has been proven to converge at a local minimum, meaning that a different initial threshold may give a different final result. K-means clustering is a method of vector quantization, originally from signal processing, that is popular for cluster analysis in data mining. It aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean, which serves as a prototype of the cluster. This results in a partitioning of the data space into Voronoi cells. K-means clustering tends to find clusters of comparable spatial extent, while the expectation-maximization algorithm allows clusters to have different shapes.
Demonstration of the standard algorithm:
Fig 3.4: k initial means (in this case k = 3) are randomly generated within the data domain.
Fig 3.5: k clusters are created by associating every observation with the nearest mean; the partitions represent the Voronoi diagram generated by the means.
Fig 3.6: The centroid of each of the k clusters becomes the new mean.
Fig 3.7: Convergence is reached.
As it is a heuristic algorithm, there is no guarantee that it will converge to the global optimum, and the result may depend on the initial clusters. As the algorithm is usually very fast, it is common to run it multiple times with different starting conditions. In the worst case, however, k-means can be very slow to converge: it has been shown that there exist certain point sets, even in 2 dimensions, on which k-means takes exponential time, 2^Ω(n), to converge. These point sets do not seem to arise in practice, which is corroborated by the fact that the smoothed running time of k-means is polynomial. The "assignment" step is also referred to as the expectation step and the "update" step as the maximization step, making this algorithm a variant of the generalized expectation-maximization algorithm.
3.5.3 COMPLEXITY
Regarding computational complexity, finding the optimal solution to the k-means clustering problem for n observations in d dimensions is:
NP-hard in general Euclidean space (d dimensions), even for 2 clusters;
NP-hard for a general number of clusters k, even in the plane;
exactly solvable in time O(n^(dk+1) log n) if k and d are fixed, where n is the number of entities to be clustered.
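Returning to the standard algorithm described above, the following is a minimal MATLAB sketch of the assignment and update steps on toy two-dimensional data (the data and the choice of k are illustrative assumptions):

rng(0);                                    % reproducible toy data
X  = [randn(50,2); randn(50,2) + 4; randn(50,2) + [8 0]];   % three blobs
k  = 3;
mu = X(randperm(size(X,1), k), :);         % k initial means drawn from the data
for iter = 1:100
    % assignment (expectation) step: attach each point to its nearest mean
    D = zeros(size(X,1), k);
    for j = 1:k
        D(:,j) = sum((X - mu(j,:)).^2, 2);
    end
    [~, idx] = min(D, [], 2);
    % update (maximization) step: each cluster centroid becomes the new mean
    muOld = mu;
    for j = 1:k
        mu(j,:) = mean(X(idx == j, :), 1);
    end
    if max(abs(mu(:) - muOld(:))) < 1e-10, break; end       % convergence
end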
3.5.4 COLOR SEGMENTATION
We need to detect only the red retro-reflecting part of the image. This is done by means of color segmentation: the RGB components are filtered separately, and finally the red area is detected. The following MATLAB code was used for the purpose:
img = imread('sample.jpg');
img = imfilter(img, ones(3,3)/9);      % smooth with a 3x3 averaging filter
% Decompose into separate color components; the red plane carries the
% retro-reflection
xr = double(img(:,:,1));
[N, M] = size(xr);
m = 4;
w = 1/m;                               % normalized cut-off radius
F = fftshift(fft2(xr));                % 2-D spectrum with the DC term centered
for i = 1:N
    for j = 1:M
        r2 = (i - round(N/2))^2 + (j - round(M/2))^2;
        if r2 > round((N/2*w)^2)
            F(i,j) = 0;                % suppress frequencies outside the cut-off
        end
    end
end
Idown = real(ifft2(ifftshift(F)));     % low-pass filtered red plane
3.6 DISABLING OF DIGITAL CAMERA
3.6.1 OVEREXPOSURE
Once the camera lens has been located, it has to be overexposed. A photograph may be described as overexposed when it has a loss of highlight detail, i.e., when the bright parts of an image are effectively all white, known as "blown-out highlights". Since the infrared beam is of higher intensity than the other light incident on the lens from the scene, the camera tends to be overexposed. The autofocus mechanism of the camera also adjusts the position of the lens to focus on the infrared beam, which prevents the camera from focusing on the image that is to be protected from capture. An example of overexposure by an infrared laser is shown below.
Fig 3.8: Normal exposure
Fig 3.9: Effect of overexposure
3.7 SURROUNDING-ADAPTIVE OVEREXPOSURE BEAM WAVELENGTH
The wavelength of the intermittently emitted infrared beam is not constant: it is altered according to the lighting of the environment, sensed by a photodetector-based sensor. If the surroundings are dark, a beam with a center wavelength of 900-980 nm is emitted; if the surroundings are bright, a beam with a center wavelength of 800-900 nm is emitted.
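The selection logic reduces to a simple comparison. In this minimal MATLAB sketch, the ambient-light reading and its threshold are hypothetical placeholders for the photodetector interface:

ambient = 0.2;           % hypothetical photodetector reading, 0 (dark) .. 1 (bright)
if ambient < 0.5         % dark surroundings
    band = [900 980];    % center-wavelength band in nm
else                     % bright surroundings
    band = [800 900];
end
fprintf('Emit IR beam in the %d-%d nm band\n', band(1), band(2));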
3.8 OBSERVATIONS
Fig 3.10: Photo captured normally, without using the camera disabling device
Fig 3.11: Photo captured while using the camera disabling device
It is observed that the image quality has decreased to a great extent. The device can thus be used to diminish the clarity and visibility of the image being captured.
CHAPTER-4 APPLICATION
4.1 SIMPLIFIED DESIGN FOR USE IN THEATRES
Fig 4.1: Design used in a theatre (1 – camera disabling device, 2 – theatre screen, 3 – camera used for piracy, 4 – theatre projector)
The film industry loses about 3 billion dollars a year to movie piracy, and this method can be deployed to prevent it. Infrared light-emitting diodes are placed behind the theatre screen, and their beams are emitted intermittently; the wavelength and timing of the beams are varied continuously by the timing and control unit. The beams are picked up by the camera's CCD sensor. Since the beams are of high intensity and narrow, the autofocus of the camera deteriorates, so the video being captured falls out of focus and the recording obtained is of poor clarity. The aim of pirating the movie is thereby defeated. Since the infrared beam does not fall within the visible range of human sight, it remains invisible to the audience; the overexposure beam therefore does not affect the movie being played on the screen.
4.2 MERITS
The circuitry and devices used for this technique are simple. The type of radiation used is proven not to be harmful to humans. The system can be implemented easily in any type of room, building, theatre, etc. without any alteration to the existing area, and since it uses low-cost technology, it can be deployed at comparatively little expense.
CHAPTER-5 CONCLUSION
The device explained above can thus be used to detect and disable hidden cameras, providing protection to all surroundings. It can prove essential in environments such as theatres, lockers, private areas, anti-espionage systems, and defense secrecy. If this technology is developed further, it will be of great help in preventing piracy, maintaining national secrecy, etc.