
2014 International Conference on Indoor Positioning and Indoor Navigation, 27th-30th October 2014
LED-Tracking and ID-Estimation for Indoor
Positioning using Visible Light Communication
Yohei Nakazawa, Hideo Makino, Kentaro Nishimori
Daisuke Wakatsuki*, Hideki Komagata**
Dept. of Information Engineering
Niigata University
Niigata, Japan
*Tsukuba University of Technology,
**Saitama Medical School
*Tsukuba, **Saitama, Japan
Abstract—We are focusing our research on indoor positioning technology, specifically a type that uses Visible Light Communication (VLC): modulatable LED lights transmit data at 9600 bps using 4 Pulse Position Modulation (4PPM), while a fish-eye lens-equipped camera receives the light signal over a 160-degree field of view. This type of lighting requires neither additional space nor additional power. We assigned a unique ID to each LED in order to recognize its position. Self-location is calculated from the relationship between the LED positions and coordinates on the image plane. In our previous research, we confirmed that self-location can be determined within 10 cm using our system. However, we needed to attach dedicated transmitters to each LED used for positioning, especially in large buildings such as hospitals and shopping malls. In this paper, we therefore propose LED tracking and ID estimation using LEDs with known IDs; doing so will significantly reduce the cost of installing and running transmitters. Additionally, as more LEDs become usable for positioning, accuracy naturally improves. We conducted experiments with the camera moving in 2 different environments: a) a small area with just 4 LEDs, and b) the VLC platform with a total of 24 LEDs, to demonstrate that as many as 13 LEDs can be identified. With 2 or more IDs detected beforehand, unidentified LEDs, as well as some that failed to be tracked, could be estimated while the camera was in motion. Average positioning errors in the smaller environment and on the VLC platform were 3.78 cm and 6.96 cm, respectively. From this, location can be determined even when some LEDs are offline.
Keywords—Indoor Positioning; Visible Light Communication;
LED light; Fish-eye camera; Image sensor
I. INTRODUCTION
Mobile factory or hospital robots need to be controlled based on precise location information, since they must constantly recognize the distance and direction toward their destination and pass through small rooms or narrow corridors. Likewise, pedestrian navigation also requires precise indoor localization. Especially for the visually impaired, state-of-the-art audio navigation systems give users a feel for their surroundings and a greater sense of control.
We are focusing our research on indoor positioning technology, specifically a type using Visible Light Communication (VLC), in which modulated LED lights transmit data while an image sensor receives the light signal [1]. LED lighting systems have spread widely and require neither additional space nor additional power. We assigned a unique ID to each LED in order to recognize its position. A practical experiment for pedestrian navigation, in which photo-sensors were used as the VLC receiver, confirmed the effectiveness of VLC [2]. In order to increase localization accuracy, we replaced the photo-sensors with a CMOS camera. With this method, self-location is calculated from the relationship between the LED position data and coordinates on the image plane.
The technology in [3] is similar to ours: it uses infrared LEDs as transmitters and a fish-eye lens-equipped camera as the receiver. The infrared LEDs are turned on and off by radio commands from the receiver in order to identify them. However, only one receiver can be used in this system, and because the transmitters must be controlled by this single receiver, the method is not suitable for multi-user navigation systems.
In our previous research, we confirmed that self-location can be determined within 10 cm using our system [4]. However, we needed to attach dedicated transmitters to each LED used for positioning, especially in large buildings such as hospitals and shopping malls. Because of the need for so many LEDs, the installation and running costs were projected to be considerable. Moreover, if some LEDs failed to transmit their IDs, locations could not be determined. Therefore, we propose ID estimation using LEDs with known IDs; doing so will significantly reduce installation and running costs. Additionally, as more LEDs become usable for positioning, accuracy naturally improves.
In this paper, we report the number of IDs correctly received and the success rate of ID estimation in positioning experiments, and then confirm the effectiveness of the proposed method based on those results.
II. PROPOSED METHOD
A. Device configuration
Fig. 1 shows the system overview. The receiver consists of a fish-eye lens-equipped camera and a laptop. The camera receives light signals over a 160-degree field of view. The image sensor is a Complementary Metal-Oxide-Semiconductor (CMOS) device capable of sampling light intensity on 4 specified pixels simultaneously. As shown in Fig. 2, we use a laptop PC (Dell Inc., Latitude E5530, Core i5 2.50 GHz) for LED-position detection, sending observation points to the camera, decoding light signals and self-localization [4]. We attached the camera to a mobile robot (LEGO MINDSTORMS EV3) in order to make it more portable.
A unique ID is assigned to each LED light using Ubiquitous Code (ucode) [5]. The function of ucode is to identify each object, place and concept in the real world. The code for each LED light is set to read-only at the time of shipment and is managed by the Ubiquitous ID Center, which prevents reuse or overwriting for security.
LED lights emit ID signals at 9600 bps using 4 Pulse Position Modulation (4PPM), based on the JEITA CP-1223 standard [6]. The modulation rule of 4PPM is shown in Table 1. In 4PPM, a fixed time period called a "symbol" is equally divided into 4 "slots". Only 1 pulse of 1 slot width is allowed per symbol, and 2 bits of data are assigned to each pulse position. A 316-bit data bundle referred to as a "frame", containing the ID information, is repeatedly transmitted from each LED. The structure of a VLC frame is shown in Table 2. Since the world coordinates of an LED are paired with its ID and stored in a database beforehand, the LED location is obtained from the received ID and the database. The LEDs' intensity is modulated according to the ID information: when data '0' is transmitted, light is emitted at maximum brightness, and the brightness is decreased by about 10 % when the transmitted data is '1', so users are unaware of any flickering of the LEDs.
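As an illustration of the modulation rule in Table 1, the following sketch encodes a bit sequence into 4PPM slot pulses (a minimal example; the function name and the assertion are ours and the frame-assembly details of CP-1223 are not reproduced here):

```python
# Minimal 4PPM encoding sketch (illustrative only; names are ours).
# Each 2-bit pair selects which of the 4 slots in a symbol carries the pulse,
# following Table 1: 00 -> 1000, 01 -> 0100, 10 -> 0010, 11 -> 0001.

SYMBOL_TABLE = {
    (0, 0): (1, 0, 0, 0),
    (0, 1): (0, 1, 0, 0),
    (1, 0): (0, 0, 1, 0),
    (1, 1): (0, 0, 0, 1),
}

def encode_4ppm(bits):
    """Encode an even-length bit sequence into a flat list of 4PPM slots."""
    if len(bits) % 2 != 0:
        raise ValueError("4PPM encodes 2 data bits per symbol")
    slots = []
    for i in range(0, len(bits), 2):
        slots.extend(SYMBOL_TABLE[(bits[i], bits[i + 1])])
    return slots

# Example: a 128-bit payload becomes 256 slots, matching the counts in Table 2.
payload = [0, 1] * 64
assert len(encode_4ppm(payload)) == 256
```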
B. VLC receiver
Table 3 shows the specification of the camera. VLC
receivers use fish-eye lenses (FIT Corp., FI-23, 160-degree field
of view) to obtain wide-angle images with an effective range of
115 degrees, and 112 degrees along horizontal and vertical
planes. Their resolution is the standard 128 by 120 pixels, with
gray scale image-output of 8 bits. The receiver can sample
intensity on specified pixels, at 20.8µsec cycles. If observation
points are specified, 2048 samples of intensity are stored, and
then transmitted to the PC, via USB cable.
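To make the sampling process concrete, the sketch below shows one way the stored intensity samples of a single observation pixel could be turned back into bits. This is a rough illustration under our own assumptions (samples already aligned to slot boundaries, one sample per slot, and the dimmed slot marking the pulse position); the actual receiver firmware is not described at this level in the paper.

```python
import numpy as np

# Rough 4PPM demodulation sketch for one observation pixel (assumptions ours).
SLOT_TO_BITS = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

def decode_4ppm(samples):
    """samples: 1-D array of intensities, length a multiple of 4 (one per slot)."""
    samples = np.asarray(samples, dtype=float)
    threshold = (samples.min() + samples.max()) / 2.0  # simple mid-level slicer
    pulses = samples < threshold        # assume the pulse slot is ~10 % dimmer
    bits = []
    for symbol in pulses.reshape(-1, 4):
        slot = int(np.argmax(symbol))   # position of the dimmed slot in the symbol
        bits.extend(SLOT_TO_BITS[slot])
    return bits
```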
C. Positioning process
When the camera is stationary, LED detection, ID decoding, ID estimation and position determination are conducted, in that order. With the camera in motion, LED detection and LED tracking are conducted first, followed by ID estimation and position detection.
1) LED detection
First, ceiling images are obtained and converted to binary format by discriminant analysis. Then, a unique number is assigned to each bright region to identify it, and the center of gravity of each region is calculated to obtain the LED positions. Finally, the centers of gravity are transformed from the fish-eye lens' coordinates to the plane coordinates.
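A minimal sketch of this detection step using OpenCV is shown below (discriminant analysis corresponds to Otsu's threshold; the area filter value and all names are our own assumptions, not the paper's implementation):

```python
import cv2

def detect_leds(gray_ceiling_image, min_area=2):
    """Binarize a gray-scale ceiling image and return LED centers of gravity."""
    # Discriminant-analysis (Otsu) binarization of the 8-bit image.
    _, binary = cv2.threshold(gray_ceiling_image, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Label each bright region and compute its centroid.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    centers = [tuple(centroids[i]) for i in range(1, n)   # skip background label 0
               if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return centers  # image-plane centers of gravity, before fish-eye correction
```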
2) LED tracking
Once the receiver obtains the observation points, they are fixed even when the camera is in motion; the only way to change them is a complete reset. Thus, if the camera is moved while information is being saved, there is some risk that the ID information may be compromised. We therefore use optical flow, a vector representing the translation of a point between 2 images, to track the LEDs. Since the LEDs are tracked and the correct IDs are assigned to them using optical flow, it is not necessary to receive light signals while the camera is in motion. We implemented optical flow based on Lucas-Kanade's pyramidal algorithm, which is fast and has proven robustness [7, 8].
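A sketch of this tracking step using OpenCV's pyramidal Lucas-Kanade implementation is given below (the window size and pyramid depth are illustrative parameters, not values reported in the paper):

```python
import cv2
import numpy as np

def track_leds(prev_frame, next_frame, prev_centers):
    """Track LED centers of gravity between two consecutive gray-scale frames."""
    prev_pts = np.float32(prev_centers).reshape(-1, 1, 2)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_frame, next_frame, prev_pts, None,
        winSize=(15, 15), maxLevel=3)           # pyramidal Lucas-Kanade [7, 8]
    tracked = {}
    for i, (pt, ok) in enumerate(zip(next_pts.reshape(-1, 2), status.ravel())):
        if ok:                                  # keep the LED's ID attached to it
            tracked[i] = tuple(pt)
    return tracked                              # LED index -> new image position
```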
Fig. 1. System overview.
Fig. 2. Visible light communication.

TABLE 1. 4PPM
| Data        | 00   | 01   | 10   | 11   |
| 4PPM signal | 1000 | 0100 | 0010 | 0001 |

TABLE 2. VLC FRAME FORMAT
|             | Start of frame           | Payload  | End of frame |
|             | Preamble  | Frame type   | Data     | CRC          |
| Data        | —         | 8 bit        | 128 bit  | 16 bit       |
| 4PPM signal | 12 bit    | 16 bit       | 256 bit  | 32 bit       |

TABLE 3. CAMERA SPECIFICATIONS
| Sensor type       | CMOS image sensor                     |
| Resolution        | 128×120 pixels                        |
| Pixel size        | 36.7 μm × 35.0 μm                     |
| Sampling interval | 20.8 μs                               |
| Dimensions        | 50×50×39 mm (excluding fish-eye lens) |
| Weight            | 70.9 g (excluding fish-eye lens)      |
| Interface         | USB 2.0                               |
3) ID-estimation
ID information assigned to an LED light can be estimated when the absolute coordinates and IDs of several LEDs are known. Our system estimates unknown IDs based on the known ones, together with the distance and azimuth between 2 or more LEDs.
Step 1: The rotation angle of the camera coordinate system is calculated from the relationship between 2 LEDs with known IDs. This rotation angle is used to align the camera coordinate system with the world coordinate system.
Step 2: LED 1, with a known ID, and another LED with an unknown ID are selected from the image, and the centers of gravity of these 2 LEDs are detected.
Step 3: The distance and azimuth between the 2 LEDs are calculated from the centers of gravity obtained in Step 2, and are transformed from the image coordinates into the world coordinate system.
Step 4: LED 2, located close to LED 1 in the world coordinates, is selected from the database, and the distance and azimuth between the camera and LED 2 are calculated.
Step 5: If the position of LED 2 is close enough to the position of the unknown LED calculated in Step 3, the ID of LED 2 is assigned to the unknown LED.
This process is applied to all of the unknown LEDs seen in the image.
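The following sketch illustrates Steps 1-5 in simplified 2-D form, under our own assumptions: the LED database, the tolerance value and all names are hypothetical, and the fish-eye-to-plane transformation and metric scaling are assumed to have been applied already.

```python
import math

# Hypothetical database: ucode ID -> world coordinates (x, y) in metres.
LED_DB = {"ucode-A": (0.0, 0.0), "ucode-B": (0.6, 0.0), "ucode-C": (0.6, 0.6)}

def estimate_ids(known, unknown_pts, tol=0.15):
    """known: {id: plane_point} for >= 2 decoded LEDs; unknown_pts: list of points.
    Returns {index of unknown point: estimated id}."""
    (id1, p1), (id2, p2) = list(known.items())[:2]
    # Step 1: rotation aligning the camera frame with the world frame.
    ang_img = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    w1, w2 = LED_DB[id1], LED_DB[id2]
    ang_world = math.atan2(w2[1] - w1[1], w2[0] - w1[0])
    rot = ang_world - ang_img
    estimates = {}
    for i, q in enumerate(unknown_pts):
        # Steps 2-3: distance and azimuth from LED 1, rotated into world coordinates.
        d = math.hypot(q[0] - p1[0], q[1] - p1[1])
        a = math.atan2(q[1] - p1[1], q[0] - p1[0]) + rot
        guess = (w1[0] + d * math.cos(a), w1[1] + d * math.sin(a))
        # Steps 4-5: pick the nearest database LED and accept it if close enough.
        cand, dist = min(((cid, math.hypot(cx - guess[0], cy - guess[1]))
                          for cid, (cx, cy) in LED_DB.items()), key=lambda t: t[1])
        if dist < tol:
            estimates[i] = cand
    return estimates
```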
4) Position-detection
a) Coordinate system
We defined the world coordinates and the camera coordinates as shown in Fig. 3. The origin of the camera coordinates Oc coincides with the apex of the lens, and the Zc axis is identical to the camera's optical axis. When the Xc axis points in the Xw direction, the azimuth φ equals 0.0 degrees, and it increases in the counterclockwise direction. The image coordinates are defined as shown in Fig. 4.
Fig. 3. The world coordinates and the camera coordinates.
Fig. 4. The camera coordinates and the image coordinates.
b) Location calculation
The horizontal position of the camera (xw, yw) and the azimuth φ are calculated from the perspective projection coordinates Pvi (Xvi, Yvi) and the world coordinates Pwi (Xwi, Ywi, Zwi) of the LEDs. We use the Levenberg-Marquardt method as a non-linear least-squares solver, which is highly robust and converges quickly [9]. The relation between the perspective projection coordinates Pvi (Xvi, Yvi) and the world coordinates Pwi (Xwi, Ywi, Zwi) is represented as follows.
$$
\begin{bmatrix} x_{ci} \\ y_{ci} \\ z_{ci} \\ 1 \end{bmatrix}
=
\begin{bmatrix}
\cos\varphi & -\sin\varphi & 0 & -x \\
\sin\varphi & \cos\varphi & 0 & -y \\
0 & 0 & 1 & -z \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix} x_{wi} \\ y_{wi} \\ z_{wi} \\ 1 \end{bmatrix}
= T_{cw}
\begin{bmatrix} x_{wi} \\ y_{wi} \\ z_{wi} \\ 1 \end{bmatrix}
\quad (1)
$$

$$
\begin{bmatrix} X_{vi} \\ Y_{vi} \\ 1 \end{bmatrix}
=
\begin{bmatrix}
f k_x & 0 & o_x & 0 \\
0 & f k_y & o_y & 0 \\
0 & 0 & 1 & 0
\end{bmatrix}
\begin{bmatrix} x_{ci} \\ y_{ci} \\ z_{ci} \\ 1 \end{bmatrix}
= C
\begin{bmatrix} x_{ci} \\ y_{ci} \\ z_{ci} \\ 1 \end{bmatrix}
\quad (2)
$$

$$
\begin{bmatrix} X_{vi} \\ Y_{vi} \\ 1 \end{bmatrix}
= C\, T_{cw}
\begin{bmatrix} x_{wi} \\ y_{wi} \\ z_{wi} \\ 1 \end{bmatrix}
\quad (3)
$$
Here, f represents the focal length of the lens, (kx, ky) describes the pixel size and (ox, oy) is the theoretical center of the image. The world coordinates are transformed into camera coordinates using the camera's external parameters Tcw, while the camera coordinates are transformed into the perspective projection coordinates using the internal parameters C. When the camera detects an LED, the following 2 equations are obtained:
$$x_{wi}\cos\varphi - y_{wi}\sin\varphi - x - X_{vi}(z_{wi} - z) = 0 \quad (4)$$
$$x_{wi}\sin\varphi + y_{wi}\cos\varphi - y - Y_{vi}(z_{wi} - z) = 0 \quad (5)$$
Since the camera's height is fixed, 3 unknown parameters remain: the horizontal position on the Xw-Yw plane and the azimuth. Each detected LED yields 2 equations, so the position and the azimuth can be calculated when 2 or more LEDs are detected.
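A compact sketch of this least-squares step using SciPy's Levenberg-Marquardt solver is given below. The residuals follow Eqs. (4) and (5); the perspective projection coordinates are assumed to be already computed from the fish-eye image, and the function and variable names are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_camera(Pv, Pw, z_cam, init=(0.0, 0.0, 0.0)):
    """Solve camera (x, y, azimuth) from LED projections Pv (N x 2) and
    LED world coordinates Pw (N x 3), with the camera height z_cam fixed."""
    Pv, Pw = np.asarray(Pv, float), np.asarray(Pw, float)

    def residuals(params):
        x, y, phi = params
        c, s = np.cos(phi), np.sin(phi)
        dz = Pw[:, 2] - z_cam
        rx = Pw[:, 0] * c - Pw[:, 1] * s - x - Pv[:, 0] * dz   # Eq. (4)
        ry = Pw[:, 0] * s + Pw[:, 1] * c - y - Pv[:, 1] * dz   # Eq. (5)
        return np.concatenate([rx, ry])

    sol = least_squares(residuals, init, method="lm")  # Levenberg-Marquardt [9]
    return sol.x  # (x_w, y_w, azimuth in radians)
```

With 2 LEDs the solver has 4 residuals for 3 unknowns, which is why 2 or more detected LEDs suffice.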
Fig. 5. The basic measurement setup.
III. EXPERIMENT
A. Experiment with the fixed camera
We conducted ID-estimation experiments with the camera
fixed at set measuring points, in 2 different environments: a) a
small one for basic measurement, having just 4 LEDs (Panasonic
Corp., NNN62022K, 11 cm in diameter), and b) the practical
VLC platform (the 1st floor of the Information Engineering
Building, Niigata University), having a total of 24 LEDs
(Panasonic Corp., NNN73072K, 13 cm in diameter). An
overhead view is shown in Fig. 5. LEDs and measuring points
in the basic measurement setup and the practical VLC platform
are represented in Fig. 6 and Fig. 7. The yellow circles are LEDs
and the x’s are measuring points.
Fig. 6. LEDs and measuring points in the basic measurement setup.
The basic setup measures 100 cm by 90 cm and is 1 m in height; sixteen measuring points are positioned at 20-cm intervals. The practical VLC platform measures 5.4 m by 7.5 m; 8 of its LEDs are mounted at a height of 3.24 m, while the other LEDs are positioned at an elevation of 2.95 m, and nine measuring points are set at 1-m intervals. We set the receiver at each measuring point to calculate the success rate of ID estimation and the positioning error. At every measuring point, the camera's azimuth is set to 0, 90, 180 and -90 degrees in turn. The apex of the camera lens (pointing upward) is 10 cm from the floor.
We simulated the arrangement of LED lights on the practical VLC platform shown in Fig. 7 and examined the influence of quantization error due to the image-sensor resolution. The camera is assumed to be set at each measuring point with the lens facing upward, allowing us to generate a binary image at every measuring point. If an LED light is captured within the direction of incidence of an image pixel, that pixel is set to white; if not, the pixel is black. Centers of gravity are detected from the white regions in the generated binary images and are used as the LED lights' coordinates on the image plane. We generated images at 2 different resolutions (128 by 120 pixels and 256 by 240 pixels) to compare the errors caused by image quantization.
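A minimal sketch of this quantization experiment is shown below. The projection model is deliberately simplified to an ideal pinhole mapping, and the focal scale and LED radius are placeholder values; the paper's simulation uses the actual fish-eye projection and platform geometry.

```python
import numpy as np

def quantized_centroid(led_xy, cam_xy, ceiling_h, img_w, img_h, f=60.0):
    """Project a circular LED onto an img_w x img_h image and return the centre
    of gravity of the white pixels (simplified pinhole model, assumptions ours)."""
    # Ideal (continuous) projection of the LED centre.
    u0 = img_w / 2 + f * (led_xy[0] - cam_xy[0]) / ceiling_h
    v0 = img_h / 2 + f * (led_xy[1] - cam_xy[1]) / ceiling_h
    r_px = f * 0.065 / ceiling_h               # 13-cm-diameter LED -> radius in pixels
    u, v = np.meshgrid(np.arange(img_w), np.arange(img_h))
    mask = (u - u0) ** 2 + (v - v0) ** 2 <= r_px ** 2   # pixels covering the LED
    if not mask.any():
        return None, (u0, v0)
    cog = (u[mask].mean(), v[mask].mean())     # quantized centre of gravity
    return cog, (u0, v0)                       # compare cog against the ideal point
```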
Fig. 7. The practical VLC platform.
B. Experiment with the camera moving
We also conducted experiments using the receiver attached to the EV3, both in the basic measurement setup and on the VLC platform. The EV3 moves at a speed of 10 cm/s along a 60-cm trajectory (red arrow) in the basic measurement environment, and along a 2-m trajectory, at the same speed, on the VLC platform.
IV. RESULTS
A. Fixed experiments
1) Basic measurement setup
Two LEDs were detected in 72.5 % of the 3200 measurements taken, and only one was detected in 23.5 %. When 2 LEDs were detected, the IDs of the 2 others were correctly estimated. The average error, maximum error and standard deviation of the position and the azimuth in the basic measurement setup are shown in Table 4. The average and maximum positioning errors were 0.03 m and 0.12 m, respectively, while the average and maximum azimuth errors were 0.49 degrees and 1.81 degrees.
2) Practical VLC platform
Next, we describe the results obtained on the practical VLC platform.
The camera was unable to receive light signals from the ceiling of the VLC platform, because its ceiling is higher than that of the basic measurement setup and the intensity of the received signals was therefore lower. We therefore manually assigned IDs to the 2 LEDs nearest each measuring point.
We took a total of 25 measurements at every point. The ratio of ID-assigned LEDs to observed LEDs was 44.3 %, and the maximum number of estimated LEDs was 13. The rate of correct ID estimation was 94.7 %. LEDs that had not been assigned IDs, or were assigned incorrect IDs, were all located at elevation angles of less than 60 degrees.
B. Moving experiments
1) Basic measurement setup
We took a total of 10 measurements. Two LEDs were detected, and IDs were correctly estimated and assigned to the remaining two before the EV3 moved forward. While in motion, the device automatically determined its location using the estimated IDs. ID tracking was processed repeatedly over a 200-ms cycle, and all LEDs were correctly traced until the EV3 reached the end point of its trajectory. As shown in Fig. 8 and Fig. 9, the average and maximum positioning errors were 3.78 cm and 6.94 cm, respectively, while the average and maximum azimuth errors were 0.90 degrees and 1.91 degrees.
2) Practical VLC platform
We assigned IDs to the 2 nearest LEDs, just as in the fixed experiment, and took a total of 10 measurements with the robot-mounted camera moving on the VLC platform. From the two known LEDs, seven to ten IDs were correctly estimated before moving the camera. ID tracking was processed at 172-ms intervals. Fig. 10 shows the number of detected LEDs. ID estimation was processed even while moving, and 10 new IDs were correctly estimated by the end point. The success rate of ID estimation was 98.8 %, and the average number of tracked IDs was 17. As shown in Fig. 11 and Fig. 12, the average and maximum positioning errors were 6.96 cm and 29.9 cm, respectively, while the average and maximum azimuth errors were 1.10 degrees and 3.68 degrees.
With the azimuth set to -90 degrees, the success rate of ID estimation varied from a minimum of 55.6 %, at (1 m, 1 m), to a maximum of 100 %, while no incorrect estimation occurred with an azimuth of 0 degrees. Occasionally, incorrect IDs were assigned when the distance to the known LED was accurate but the azimuth was not.
Positions were measured using 5 or more LEDs at every measuring point. The average error, maximum error and standard deviation of the position and the azimuth on the VLC platform are shown in Table 5. The average and maximum positioning errors were 0.18 m and 0.46 m, respectively, while the average and maximum azimuth errors were 1.80 degrees and 8.37 degrees. We excluded any positioning results with incorrectly assigned IDs. In the simulation, by contrast, the average and maximum positioning errors were 5 mm and 8 mm using the 128 x 120 pixel image sensor, and 1 mm and 2 mm using 256 x 240.
TABLE 4. ERRORS AT THE BASIC MEASUREMENT SETUP
|                   | Average | Maximum | SD   |
| Position (m)      | 0.03    | 0.12    | 0.08 |
| Azimuth (degrees) | 0.49    | 1.81    | 0.43 |
TABLE 5. ERRORS AT THE PRACTICAL VLC PLATFORM
|                   | Average | Maximum | SD   |
| Position (m)      | 0.18    | 0.46    | 0.10 |
| Azimuth (degrees) | 1.80    | 8.37    | 3.42 |
Fig. 8. Position error in x direction at the basic measurement setup.
Fig. 9. Directional error in the azimuth at the basic measurement setup.
Fig. 10. The number of detected LEDs.
Fig. 11. Position error in x direction at the practical VLC platform.
V. DISCUSSION
A. Positioning error
1) Fixed experiments
In the simulation results using a 128 x 120 pixel image
sensor, average positioning error was 5 mm. Using 256 x 240
pixels, the average error was less than that using 128 x 120 pixels.
Therefore, higher positioning accuracy can be achieved using
higher resolution image sensors.
In the basic measurement setup, the average positioning error and azimuth error were 0.03 m and 0.49 degrees, respectively. On the practical VLC platform, they were 0.18 m and 1.80 degrees. The average error in the experiments was larger than in the simulation results. Possible reasons for the discrepancy are: a) a lack of correspondence between the image-plane coordinates and the fish-eye coordinates; and b) incorrect calculation of distance and azimuth due to the low resolution of the image.
First, the perspective projection and fish-eye coordinates did not correspond well with one another, due to unique distortions in the lens that must be measured in order to calibrate it accurately [10, 11].
Next, Fig. 13 shows an LED image obtained using a 128 x 120 pixel camera and an HD camera. On the practical VLC platform, where the ceiling height measures 3.14 m, 3 to 8 pixels are used to depict the LED with the 128 x 120 pixel camera, while roughly 200 pixels are used with the HD camera. As a result, the centers of gravity include relatively large errors with the low-resolution camera.
Fig. 12. Directional error in the azimuth at the VLC-platform setup.
2) Moving experiments
With the camera moving on the practical VLC platform, the average positioning error was less than 10 cm and the average azimuth error was roughly 1 degree. Therefore, accurate positioning can be achieved in real time with the proposed method, even when the camera is moving.
Because the precise arrangement of the LEDs is critical, subsequent investigations will verify the amount of error caused by LED placement.
B. Success-rate of ID estimation
1) Fixed experiments
In the basic measurement setup, we could estimate the IDs of 44.3 % of the observed LEDs, with an accuracy rate of 94.7 %. Using the proposed method, we can estimate and use LEDs even when they are not transmitting their IDs.
On the practical VLC platform, however, accurate ID estimation for several LEDs is problematic, due to the low resolution of the image sensor. There is some degree of error in the correspondence between the image-plane and the fish-eye coordinates, for the reasons described in Section V.A.1.
Fig. 13. Enlargement of an LED image: a) 128 x 120 pixel camera; b) HD camera.
2) Moving experiments
With 2 or more IDs detected beforehand, unidentified LEDs, as well as some that failed to be tracked, could be estimated while the camera was in motion. From this, the location was determined even though some LEDs were offline.
C. Signal attenuation
The camera was unable to receive accurate signals on the VLC platform because the light signal was attenuated. Possible solutions for this problem are to: a) raise the camera to a higher position, b) increase the LEDs' modulation ratio, or c) use a camera of higher sensitivity. The simplest of the three is a). In the basic setup, the camera was tested with light signals coming from a 1-m-high ceiling, so a camera located at a height of about 2 m on the VLC platform should be able to receive the signals. Option b) is also possible, but LED flicker then becomes perceptible to some users. In the future, as cameras become more sensitive, discriminating brightness and decoding the information will no longer pose a problem.
VI. CONCLUSION
We proposed a VLC-based ID-estimation and LED-tracking method for use in indoor positioning contexts. We conducted experiments in 2 different environments to demonstrate that the identities and positions of as many as 13 LEDs can be estimated simultaneously from the positions of just 2 other LEDs with known IDs. Additionally, using LED tracking, the positioning error was less than 10 cm and the average azimuth error was roughly 1 degree on the practical VLC platform, even when the camera was in motion. Therefore, our proposed VLC-based localization method can be used by mobile robots or by visually impaired people who need precise location and direction information for indoor navigation. Since the transmitters and receivers work independently, more than one user can obtain positioning information at a time, with fewer transmitters. Our next task will involve a series of simulations designed to clarify which arrangement of LEDs is best suited to transmitting their identities.
ACKNOWLEDGMENT
This research was partially supported by the Strategic Information and Communications R&D Promotion Program, Ministry of Internal Affairs and Communications of Japan, and by a Grant-in-Aid for Scientific Research (B 24300199), Japan Society for the Promotion of Science (JSPS).
REFERENCES
[1] S. Haruyama, "Visible light communication," The Journal of the Institute of Electronics, Information and Communication Engineers, pp. 1055-1059, Dec. 2011.
[2] M. Nakajima and S. Haruyama, "New indoor navigation system for visually impaired people using visible light communication," EURASIP Journal on Wireless Communications and Networking, 2013.
[3] B. Sohn, J. Lee, H. Chae, and W. Yu, "Localization system for mobile robot using wireless communication with IR landmark," Proceedings of the 1st International Conference on Robot Communication and Coordination (RoboComm '07), no. 6, 2007.
[4] Y. Nakazawa, H. Makino, K. Nishimori, D. Wakatsuki, and H. Komagata, "Indoor positioning using a high-speed, fish-eye lens-equipped camera in visible light communication," Proceedings of the International Conference on Indoor Positioning and Indoor Navigation, pp. 151-158, October 2013.
[5] "Ubiquitous Code: ucode," T-Engine Forum Ubiquitous ID Center Document, WG930-S101-1.A0.10, 2009.
[6] AV & IT Systems, JEITA CP-1223 Visible Light Beacon System, Japan Electronics and Information Technology Industries Association, May 2013.
[7] B. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," Proceedings of the Seventh International Joint Conference on Artificial Intelligence, pp. 674-679, 1981.
[8] J. Y. Bouguet, "Pyramidal implementation of the Lucas Kanade feature tracker," Technical Report, Intel Corporation, Microprocessor Research Labs, 2000.
[9] D. W. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," SIAM J. Appl. Math., vol. 11, pp. 431-441, 1963.
[10] H. Komagata, I. Ishii, A. Takahashi, D. Wakatsuki, and H. Imai, "A geometric calibration method of internal camera parameter for fish-eye lenses," The Transactions of the Institute of Electronics, Information and Communication Engineers D (IEICE Trans. D), vol. J89-D, no. 1, pp. 64-73, Jan. 2006.
[11] H. Komagata, I. Ishii, H. Makino, A. Takahashi, D. Wakatsuki, and H. Imai, "Fish-eye camera calibration using intensity slant patterns," The Transactions of the Institute of Electronics, Information and Communication Engineers D (IEICE Trans. D), vol. J93-D, no. 5, pp. 621-631, May 2010.
[12] A. Kupper, "Location-Based Services: Fundamentals and Operation," 2005.
[13] K. W. Kolodziej and J. Hjelm, "Local Positioning Systems: LBS Applications and Services," 2006.