***Random Ball Cover (RBC) v0.2***
Lawrence Cayton
lcayton@tuebingen.mpg.de

(C) Copyright 2010, Lawrence Cayton [lcayton@tuebingen.mpg.de]

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <http://www.gnu.org/licenses/>.
---------------------------------------------------------------------
SUMMARY

This is a C and CUDA implementation of the Random Ball Cover data
structure for nearest neighbor search, described in

L. Cayton, A nearest neighbor data structure for graphics hardware.
ADMS, 2010.

L. Cayton, Accelerating nearest neighbor search on manycore systems.
Submitted.

---------------------------------------------------------------------
FILES

* brute.{h,cu} -- implementation of brute force search (CPU and GPU
  versions).
* defs.h -- definitions of constants and macros, including the
  distance metric.
* driver.cu -- example code for using the RBC data structure.
* kernels.{h,cu} -- implementation of all the (device) kernel
  functions, except those related to the scan (see sKernel below).
* kernelWrap.{h,cu} -- CPU wrapper code around the kernels.
* rbc.{h,cu} -- the core of the RBC data structure, including the
  implementation of the build and search algorithms.
* sKernel.{h,cu} -- implementation of the kernel functions related to
  the parallel scan algorithm (used within the build method).
* sKernelWrap.{h,cu} -- wrappers for the kernels in sKernel.
* utils.{h,cu} -- misc utilities used in the code.
* utilsGPU.{h,cu} -- misc utilities related to the GPU.

---------------------------------------------------------------------
COMPILATION

Type make in a shell.  Requires GCC and NVCC (CUDA).  The code has
been tested under GCC 4.4 and CUDA 3.1.

---------------------------------------------------------------------
USE

To use the RBC data structure, you will likely need to integrate this
code into your own.  The driver.cu file provides an example of how to
use the RBC implementation.  To try it out, type
>testRBC
at the prompt and a list of options will be displayed.  Currently, the
test program assumes that the input is a single binary file, which it
then splits randomly into queries and the database.  Clearly, such a
setup is only useful for testing the performance of the data
structure.  To use the data structure in a more practical fashion, you
may wish to call the readData function on separate files.  There is
also a readDataText function in driver.cu for your convenience.

The core of the implementation is in rbc.cu and in the kernel files.
There is a buildRBC function, a queryRBC function, and a kqueryRBC
function, which together should suffice for basic use of the data
structure.

Currently, the kernel functions are reasonably optimized, but can be
improved.  Indeed, the results appearing in the ADMS paper came from a
slightly more optimized version than this one.

---------------------------------------------------------------------
MISC NOTES ON THE CODE

* The code currently computes distance using the L_1 (manhattan)
  metric.  If you wish to use a different notion of distance, you must
  modify defs.h.  It is quite simple to switch to any metric that
  operates along the coordinates independently (eg, any L_p metric),
  but more complex metrics will require some additional work.  The L_2
  metric (standard Euclidean distance) is already defined in defs.h.
94
* The k-NN code is currently hard-coded for k=32.  It is hard-coded
  because it uses a manually implemented sorting network.  This design
  allows all sorting to take place in on-chip (shared) memory, and is
  highly efficient.  Note that the NNs are returned in sorted order,
  so that if one wants only, say, 5 NNs, one can simply ignore the
  last 27 returned indices.  For k>32, contact the author.
101
* The code requires that the entire DB and query set fit into the
  device memory.

* For the most part, device variables (ie, arrays residing on the
  GPU) begin with a lowercase d.  For example, the device version of
  the DB variable x is dx.
108
* The computePlan code is a bit more complex than is needed for the
  version of the RBC search algorithm described in the paper.  The
  search algorithm described in the paper has two steps: (1) find the
  closest representative to the query; (2) explore the points owned
  by that representative (ie, the s closest points to the
  representative in the DB).  The computePlan code is more complex in
  order to make it easy to try out other options.  For example, one
  could instead search the points owned by the *two* closest
  representatives to the query.  This would require only minor changes
  to the code, though it is currently untested.
119
* Currently the software works in single precision.  If you wish to
  switch to double precision, you must edit the defs.h file.  Simply
  uncomment the lines

  typedef double real;
  #define MAX_REAL DBL_MAX

  and comment out the lines

  typedef float real;
  #define MAX_REAL FLT_MAX

  Then, you must do a

  make clean

  followed by another make.
137
* This software has been tested on the following graphics cards:
  NVIDIA GTX 285
  NVIDIA Tesla C2050

* This software has been tested under the following software setup:
  Ubuntu 10.04 (Linux)
  GCC 4.4
  CUDA 3.1

  Please share your experience getting it to work under Windows and
  Mac OS X!
149
* If you are running this code on a GPU that is also driving your
  display: a well-known issue with CUDA code in this situation is that
  the operating system's watchdog will automatically kill kernels that
  run for more than roughly 5-10 seconds.  You can get around this in
  Linux by switching out of X-Windows (often CTRL-ALT-F1 does the
  trick) and running the code directly from the terminal.