
The ARNL Localization and Path Planning System

1.5.1

MobileRobots Advanced Robotics Navigation and Localization System (ARNL) Developer's Reference Manual

Copyright (c) 2004, 2005 ActivMedia Robotics, LLC.
Copyright (c) 2006, 2007 MobileRobots Inc.
All rights reserved.

Introduction to ARNL

ARNL is MobileRobots Inc.'s software development library for localizing and intelligently navigating mobile robots within mapped indoor environments. ARNL comes self-contained, complete with its own ARIA (Advanced Robotics Interface for Applications) development environment with header and library files, plus the ArNetworking networking system with which ARNL is integrated for easy visualization and control by remote clients. Accordingly, when using ARNL, you do not need to also include or link to a separate ARIA installation.

This and related ARNL documents describe how to program and operate your ARIA-controlled MobileRobots or ActivMedia mobile robot or simulator with ARNL. To best understand ARNL's features and functions, please also follow along with the example source files found in the Arnl/examples directory, as well as with the more detailed class and method definitions found in accompanying documents.

There are two ARNL libraries: "ARNL" uses data obtained either from a scanning laser range finder or integrated sonar sensors to localize and to detect obstacles, and is part of the Laser Navigation accessory package. The library and package named "SONARNL" is a version of ARNL that includes only sonar navigation and localization, but may be used on any MobileRobots platform. This documentation describes both ARNL and SONARNL.

ARNL uses range sensor information together with robot odometry data and a prepared environment map. Maps may include designated forbidden areas, goals, and "home" poses. You can build maps using the Mapper3 (tm) application from MobileRobots.

Important note regarding required configuration changes for Arnl 1.5

It is _VERY_ important to note that ARNL 1.5 drives and path plans very differently (and much better!) than older versions, but you may need to adjust some parameters for this to happen. If you keep the old parameter values, path planning may actually be worse, and at the very least it will be much slower.

The new version follows the path much more closely than older versions. This version uses all of the movement parameters to determine how fast it needs to go at each point in the path. This means it uses the rotVelMax, rotAccel, rotDecel, transAccel, and transDecel to see how fast it can go around a corner, and when it has to start decelerating to make the turn.

You can adjust the 'Driving*' parameters in the 'Path Planning settings' section; these apply only while ARNL is path planning (so they won't affect teleoperation or other driving you do with the robot). They work by sending the corresponding commands down to the robot whenever path planning takes over, so if you aren't connected directly from the onboard computer this might cause some problems. You can instead set the same parameters in the new 'Robot config' parameter section; those are only sent down once, but they apply to all driving the robot does. If you leave those parameters at their default of 0, the values used come from the firmware. To adjust or inspect the firmware values, use the configuration program for the microcontroller (ARCOScf, AROScf, p2oscf, etc.).

Some values we've used are a rotVelMax of 250, rotAccel of 300, rotDecel of 300, transAccel of 600, and transDecel of 600. These aren't appropriate for all robots or situations, but they should give you an idea of the range of values; in general, the higher you set them the better and faster ARNL will drive. You should obviously find values appropriate to your robot, its payload, its environment, and its task. Note that for some of these you may need to change the corresponding *Top value with the configuration program for the microcontroller (some older robots shipped with low Top values). Also note that the gyros we use only give accurate readings up to 300 degrees/sec, so your rotVelMax should never go over that value, and probably shouldn't come close. It is okay if rotAccel or rotDecel is higher.
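
As a hypothetical illustration, those example values might appear in the 'Path Planning settings' section of arnl.p roughly as follows. Only DrivingTransVelMax and DrivingRotVelMax are named explicitly later in this manual; the remaining Driving* names and the file layout here are assumptions, so compare with the arnl.p shipped with your installation:

  Section Path Planning settings
  DrivingRotVelMax 250
  DrivingRotAccel 300
  DrivingRotDecel 300
  DrivingTransAccel 600
  DrivingTransDecel 600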

If you are using ARCOS you should get the latest version (1.9), since previous versions had a bug where a rotDecel and rotAccel (or transAccel and transDecel) command sent in the same 5 millisecond period would cause the first one to be overwritten, which causes problems (especially since the software's idea of how the robot was driving then didn't match the robot's).

Architecture and Operation

ARNL performs two related tasks, localization and navigation (by planning and following a path). Each of these tasks is performed in its own thread in the background, while your main program continues to execute.

For more information about localization by laser, see Localization. For more information about localization by sonar, see Sonar Localization. For more information about path planning and path following, see PathPlanning.

We've included a separate ARIA-based networking layer, too, so that you may operate the ARNL-enabled robot over a TCP/IP-based network. The respective class libraries each contain all the methods your application needs to initiate, operate, get status, and terminate the threads. Please consult their separate documentation for details.

To program with ARNL, simply include the Arnl/Aria.h, Arnl/Arnl.h and, if needed, Arnl/ArNetworking.h header files with your source code and link it with the respective libArnl, libAriaForArnl, and libAriaNetworkingForArnl libraries in Arnl/lib. Demonstration programs in the Arnl/examples directory show how to program simple, interactive path planning systems using these libraries.
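
For example, on Linux a build command might look roughly like the following. Here myNavServer.cpp is a hypothetical source file, and the /usr/local/Arnl install prefix and the -lpthread -ldl flags are assumptions; the Makefiles in Arnl/examples show the exact flags for your system:

  g++ myNavServer.cpp -o myNavServer \
      -I/usr/local/Arnl/include \
      -L/usr/local/Arnl/lib \
      -lArnl -lAriaForArnl -lAriaNetworkingForArnl -lpthread -ldl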

In your ARNL-enabled ARIA applications, you need at least three objects to enable localization: an ArRobot holding the robot details, an ArSick holding the laser range finder, and an ArMap object holding the details of the map.

For path planning and obstacle avoidance, you need four objects: an ArRobot holding the robot details, an ArSick holding the laser, an ArRangeDevice holding the sonar, and an ArMap holding the map details.
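
As a minimal sketch of how these objects might be created together (based loosely on the programs in Arnl/examples; the exact ArLocalizationTask and ArPathPlanningTask constructor arguments, and the laser connection steps omitted here, should be checked against the headers and Arnl/examples/guiServer.cpp for your ARNL version):

  #include "Aria.h"
  #include "Arnl.h"

  int main(int argc, char **argv)
  {
    Aria::init();
    ArArgumentParser parser(&argc, argv);
    ArSimpleConnector connector(&parser);

    ArRobot robot;            // robot details and odometry
    ArSonarDevice sonar;      // sonar range device
    ArSick laser;             // SICK laser range finder
    ArMap map;                // map of the operating environment

    robot.addRangeDevice(&sonar);
    robot.addRangeDevice(&laser);

    // "office.map" is a hypothetical map file name.
    if (!map.readFile("office.map"))
      ArLog::log(ArLog::Terse, "Could not read office.map");

    if (!connector.parseArgs() || !connector.connectRobot(&robot))
    {
      ArLog::log(ArLog::Terse, "Could not connect to the robot");
      Aria::exit(1);
    }
    robot.runAsync(true);

    // Connect and run the laser here (see guiServer.cpp); omitted in this sketch.

    // Assumed constructor arguments: robot, laser and map.
    ArLocalizationTask locTask(&robot, &laser, &map);
    // Assumed constructor arguments: robot, laser, sonar and map.
    ArPathPlanningTask pathTask(&robot, &laser, &sonar, &map);

    robot.waitForRunExit();
    Aria::exit(0);
    return 0;
  }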

Once the localization and path-planning threads are initialized and enabled, the application uses default parameters to run the robot. To change your robot's characteristics and behaviors, including those related to localization and path planning, reset these parameters in the respective modules through the various ARIA and ARNL methods, or by using a customized ARIA configuration file such as the one found in params/arnl.p, which is read when the application runs.

Operating ARNL

To operate an ARNL-enabled application, you first physically position the robot (or simulated robot) at some known position and pose (a map-embedded home position, or the starting 0,0,0 location of the map) so that the robot starts up localized. Thereafter, the localization task (ArLocalizationTask or ArSonarLocalizationTask) keeps the robot localized in the map using the range finder data and the robot's odometry.

Send the robot autonomously, safely and efficiently to goal locations in the map using the ArPathPlanningTask thread. Use the ArNetworking server thread to interact with the ARNL system over a network using TCP/IP protocols, such as with the MobileEyes(tm) GUI, for example. See the example server program Arnl/examples/guiServer.cpp for details.
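
Continuing the sketch shown earlier, the corresponding calls might look roughly like this. "Entrance" is a hypothetical goal name that would have to exist in your map, and the method names are assumptions to be checked against the ArLocalizationTask and ArPathPlanningTask headers:

  // Localize at the mapped home position before driving anywhere.
  if (!locTask.localizeRobotAtHomeBlocking())
    ArLog::log(ArLog::Normal, "Warning: initial localization failed");

  // Ask the path planning task to drive the robot to a named goal in the map.
  pathTask.pathPlanToGoal("Entrance");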

For Localization, ARNL uses the Monte Carlo localization algorithm to accurately localize the robot within a given map of its operating environment given the robot's odometry and current range sensor measurements. For path planning, ARNL uses a grid based search to compute an optimal path from the robot's current pose in the map to a given goal pose in the map. This basic path is then locally revised and smoothed to avoid local obstacles. ARNL then uses the dynamic window approach to compute the robot's linear and rotational velocities to enable it to faithfully track the desired path.

Making maps

To localize and navigate, ARNL needs a map of the robot's operating environment. Maps may also contain logical features such as goal points, forbidden areas, and more.

To learn more about creating a map, see the documents Mapping.txt and SonarMapping.txt, the ARNL or SonARNL manuals (copies available at http://robots.mobilerobots.com/), and documentation for Mapper3.

Advanced Features of ARNL

In addition to the basic localization and navigation functions, ARNL has advanced features which enable the robot to localize and navigate in a more specialized manner. Some of these are as follows.

  1. One way corridors: These allow the user to restrict the direction of robot motion in certain areas of the map such that the dot product of the robot's motion vector and the direction of the area it is in is always greater than 0 (see the illustrative sketch after this list). The one way areas are defined as such in the given map. Note that the use of one way areas may cause otherwise accessible goals to become inaccessible or reachable only by circuitous paths.

  2. Re-plan paths using global range devices: These allow the user to set up the robot with an ArGlobalReplanningRangeDevice, which can hold certain range data in memory. This allows the robot to replan a path when it finds that the current path to the goal is blocked.

  3. Sectors: These are designated areas in the map which can be used to change the robot's driving parameters when the robot enters the sector. They can be used, for example, to slow the robot down in certain sectors and speed it up in others.

  4. Reflectors: The SICK laser range finder can not only sense the range of an object but also measure its reflectivity. If the robot's environment is marked with special reflective tape (manufactured by SICK), the SICK can detect it. ARNL can use the known positions of these landmarks on the map together with the reflectors it sees in its current laser view to compute the pose of the robot. To achieve this, ARNL uses an extended Kalman filter formulation to fuse the pose from the MCL localization, the movement from the encoders, and the reflections from the laser. If enabled, ARNL can also compute the pose using a closed form solution of the landmark equations, as long as it can match the observed reflections with the reflectors on the map with at least 2 matches. This triangulation computation is also invoked when the regular MCL localization fails and there is more than one reflector in view. This advanced feature was added to ARNL to enable the robot to localize even in locations such as a factory, where the environment varies so widely that the robot loses localization using the regular MCL localization. To use reflectors, the map used must contain the reflector data, and the user must enable the reflector bits for the SICK laser by including the following command line arguments: -laserReflectorBits 3ref -laserUnits 1cm (for example, guiServer -map reflectors.map -laserReflectorBits 3ref -laserUnits 1cm).

  5. Restrictive Boundary and Sectors: These are areas and lines in the map where the cost to traverse is higher than in normal cells. These constructs can be used to force the robot to plan around them unless there is no alternative.

    The user can add one way areas and sectors to a map using MobileRobots' Mapper3 software. Advanced features will be incrementally added to ARNL as and when needed. To get the reflector localization to work, you will need to use a map created with the Mapper3 software from MobileRobots. Please refer to the Mapper3 documentation to create a map and set it to display the reflectors in MobileEyes.

  6. Inter-robot communication and mutual avoidance (not available in SONARNL). The multi-robot networking services allow robots to communicate their positions and path data either in a peer-to-peer fashion, or through a central server, to better avoid each other in a multiple-robot application. To use this feature, you must create an ArServerHandlerMultiRobotPeer or an ArServerHandlerMultiRobot object and configure relevant parameters. See the ArServerHandlerMultiRobotPeer and ArServerHandlerMultiRobot documentation for details.
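
As a purely illustrative aside (this is not ARNL code), the one way constraint described in item 1 can be pictured as a simple dot product test between the robot's motion vector and the area's designated direction:

  #include <cmath>

  // True if a motion vector (vx, vy) is allowed inside a one way area whose
  // designated direction is dirDeg degrees: the dot product of the motion
  // vector and the area direction must be greater than zero.
  bool oneWayMotionAllowed(double vx, double vy, double dirDeg)
  {
    double rad = dirDeg * M_PI / 180.0;
    return (vx * std::cos(rad) + vy * std::sin(rad)) > 0.0;
  }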

Documentation and Coding Convention

For clarity while you read this technical document, we follow common C++ coding conventions:
  1. Class names begin with a capital letter.
  2. Enums either begin with a capital letter or are all in caps.
  3. Use enums or constant variables instead of preprocessor macros.
  4. Member variables in classes are prefixed with 'my'.
  5. Static variables in classes are prefixed with 'our'.
  6. Member function names start with a lower case letter.
  7. Capitalize each word except the first one in a name; likeThisForExample.
  8. Write all code so that it can be used threaded.
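
For instance, a class written to these conventions might look like the following (illustrative only; it is not an actual ARNL class):

  class ExampleCounter
  {
  public:
    ExampleCounter() : myCount(0) { ourInstances++; }
    // Member function names start with a lower case letter and capitalize
    // each following word, likeThisForExample.
    void incrementCount(void) { myCount++; }
    int getCount(void) const { return myCount; }
  protected:
    int myCount;              // member variables are prefixed with 'my'
    static int ourInstances;  // static members are prefixed with 'our'
  };

  int ExampleCounter::ourInstances = 0;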

Customizing ARNL

The actual motion of the robot when following a path depends on a number of factors which are as follows.
  1. Low level robot control parameters contained in ARIA, such as the maximum allowable linear and rotational accelerations, as set in ARIA's robot parameter files (for example, p3dx.p when connected to a robot of type "p3dx").
  2. The localization and path planning parameters used by ARNL such as the maximum linear and rotational velocities set in the configuration.
  3. The degree of crowdedness of the actual environment the robot is moving in.

Many parameters allow the user to customize the behavior of the robot according to the robot's environment. These parameters are obtained by loading a parameter file using ArConfig. By default, ARNL will load parameters from arnl.p and SONARNL will load parameters from sonarnl.p. For example, if the environment has few unmapped obstacles and is relatively fixed, the user can get away with a lower CollisionDistance parameter, which will reduce the collision computation. If the robot has to navigate through narrow passages like doorways, PlanRes can be decreased to provide more resolution. Changing some of these parameters can have unintended consequences because of limited resources such as the speed of the onboard computer and its memory. For example, reducing PlanRes by a factor will cause the path planning computation to increase by the square of that factor. Planning and obstacle avoidance can also fail to operate as expected due to parameter values. (Remember, always test your software carefully in a controlled environment.)
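
As a minimal loading sketch in C++, assuming the standard ArConfig::parseFile() call and that the localization and path planning tasks have already been constructed (so that their parameters are registered with the global ArConfig); the servers in Arnl/examples show the authoritative loading sequence:

  // Parse the ARNL parameter file after the tasks have added their
  // parameters to the global ArConfig object.  params/arnl.p is the default
  // for ARNL; SONARNL loads params/sonarnl.p instead.
  if (!Aria::getConfig()->parseFile("params/arnl.p"))
    ArLog::log(ArLog::Normal, "Could not parse params/arnl.p, using defaults");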

The following subsections list the parameters and their use in arnl.p. The parameters are broadly divided into localization and path planning parts. The default values are noted in parentheses.

The default parameters in arnl.p are meant to drive the Pioneer class of robots in office-type environments. If you are using a larger and heavier robot such as a Powerbot, remember to tune the default parameters to your needs.

Key MCL Localization parameters:

  1. Map: The path to the map filename of the environment we'll use to navigate.
  2. NumSamples: (2000) Number of pose samples for MCL. The larger this number, the more computation localization will take. Too low a number will cause the robot to lose localization.
  3. AdjustNumSamplesFlag: (false) The number of samples is by default kept high to keep the robot from losing localization even after initialization. This number can be lowered during motion in places of the map where the localization score is high, to reduce the computation load. Set this flag to true if you want to vary the number of samples with the localization score. (As the score drops, the number of samples will rise.)
  4. GridRes: (100) The resolution of the occupancy grid representing the map in mm. A smaller value results in more accuracy but more computation.
  5. PassThreshold: (0.2) After MCL sensor correction, the sample with the maximum probability will have a score based on the match between the sensor readings and the map points. This is the minimum score out of 1.0 to be considered localized.
  6. PeturbX: (10) After sensor correction and resampling, each chosen pose is perturbed to generate a new sample. This parameter decides the range of the perturbation along the X axis in mm.
  7. PeturbY: (10) Same as above, but for the range of the perturbation along the Y axis in mm.
  8. PeturbTh: (1) Same as above, but for the range of the perturbation of the angle in degs.
  9. FailedX: (300) Range of the box in the X axis in mm to distribute samples after localization fails.
  10. FailedY: (300) Range of the box in the Y axis in mm to distribute samples after localization fails.
  11. FailedTh: (360) Range of the angle in degs to distribute samples after localization fails.
  12. RecoverOnFail: (false) If localization fails, this flag decides whether a static localization is attempted around the last known robot pose. Such a reinitialization can cause the robot to be hopelessly lost if the actual robot pose is very different from its last known pose.
  13. PeakStdX: (10) If the standard deviation of all the X coords of the samples lies within this number, the localization is considered to have a unique solution. If not, the localization is considered to have multiple solutions.
  14. PeakStdY: (10) Same as above but pertains to the Y axis in mm.
  15. PeakStdTh: (1) Same as above but pertains to the Angle in degrees.
  16. PeakFactor: (0.000001) A number of distinct sample poses can have non-zero probabilities, such as when there are ambiguities along the center of a long corridor. This is the threshold below the maximum probability for a sample to be considered a valid hypothesis.
  17. StdX: (200) The standard deviation of the Gaussian ellipse when localization is initialized in a map.
  18. StdY: (200) Same as above but pertains to the Y axis in mm.
  19. StdTh: (30) Same as above but pertains to the Angle in degrees.
  20. KMmPerMm: (0.2) When the robot moves linearly, the error in distance is proportional to the distance moved. This error is given as a fraction in mm per mm.
  21. KDegPerDeg (0.2) When the robot rotates, the error in the angle is proportional to the angle turned. This is expressed as a fraction in degs per deg.
  22. KDegPerMm (0.01) When the robot moves linearly it can also affect its orientation. This drift can be expressed as a fraction in degs per mm.
  23. TriggerDistance: (200) Since MCL localization is computationally expensive, it is triggered only when the robot has moved this far in mm.
  24. TriggerAngle: (5) Same as above but now it triggers when the robot has turned this far in degs.
  25. TriggerTime (0) Since MCL localization is computationally expensive, it is triggered only when the robot has been idle for this long in milliseconds. (A value of 0 will disable the feature)
  26. TriggerTimeX (100) When localization is triggered by time this parameter decides the range of the samples in X coords in mm.
  27. TriggerTimeY (100) When localization is triggered by time this parameter decides the range of the samples in Y coords in mm.
  28. TriggerTimeTh (180) When localization is triggered by time this parameter decides the range of the samples in Theta coords in degs.
  29. SensorBelief: (0.9) Probability that a range reading from the laser is valid. This is used in the correction of the probabilities of the samples using the sensor.
  30. ObsThreshold: (0.1) The threshold value of the occupancy grid to consider as occupied.
  31. AngleIncrement: (0) Localization will use only one laser reading per this many degrees. The lower limit is decided by the LaserIncrement setting.
  32. DiscardThreshold: (0.33) A robot sample pose lying inside an occupancy grid cell with a value above this will usually be discarded. This is useful in cases where the robot may intersect map points, such as during PatrolBot docking.
  33. FuseAllSensors: (true) ARNL uses a Kalman filter which allows it to combine the data from the MCL localization, the movement from the encoders between cycles, and reflectors if they are mapped and seen by the laser. Set this flag to false to disable this advanced feature and revert to basic MCL localization.
  34. ReflectorVariance (10000.0) This number will be used as the variance of the (x, y) coords of the center of the reflectors in the R matrix of the Kalman filter.
  35. Qxx (100.0) This is the first element of the diagonal covariance matrix which will define the error in the Kalman model for the X axis.
  36. Qyy (100.0) This is the second element of the diagonal covariance matrix which will define the error in the Kalman model for the Y axis.
  37. Qtt (1.0) This is the third element of the diagonal covariance matrix which will define the error in the Kalman model for the Theta axis.
  38. ReflectorMatchDist (1000.0) When finding the closest reflector in the map to an observed reflection, this is the maximum distance the system will search to find the closest reflector.
  39. ReflectorMaxRange (8000.0) This is the maximum distance that the SICK lrf is capable of seeing a reflector. (This is smaller than the max range of the regular SICK readings).
  40. ReflectorMaxAngle (30.0) This is the maximum angle of incidence that the SICK lrf is capable of seeing a reflector at. (This is much smaller than the angle that the regular SICK readings are capable of returning).
  41. UseReflectorCenters (true) The Kalman filter matches the returns from the laser reflectors to the reflectors in the map. When there is more than one ray from one reflector, this flag will bunch the rays into groups and match the center ray with the center of the reflector. If the flag is set to false, the rays are matched to points on the reflector by line intersection. This involves more computation but may be more accurate.
  42. Triangulate (true) The regular Kalman filter tries to fuse the encoder pose and the data from the reflectors. This is an incremental process which will eventually converge to the right pose within a few iterations, depending on the uncertainty models of the sensors. But when the robot is lost and sees more than one reflector, the pose can be computed directly using a closed form solution. (The key qualifier is that the current pose estimate of the robot must be close enough for the reflectors in the map to match correctly with the corresponding reflections in the sensor.)
  43. TriangulateScoreThreshold (0.5) When the map of the environment is very close to the actual environment the robot encounters, triangulation using a few reflectors can actually worsen the localization. This is because the uncertainty from combining the few reflectors and their locations is usually worse than the uncertainty from using all the laser range values in the MCL. The triangulation will kick in only if the MCL score drops below this value.
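
As a hypothetical excerpt, a few of the localization parameters above might appear in arnl.p roughly as follows (the section name and layout here are assumptions based on the usual ArConfig file format; compare with the arnl.p shipped with your installation):

  Section Localization settings
  NumSamples 2000
  GridRes 100
  PassThreshold 0.2
  TriggerDistance 200
  TriggerAngle 5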

Key Path planning parameters:

  1. MaxSpeed: (750) Maximum speed during path following in mm/sec.
  2. MaxRotSpeed: (100) Maximum rotational speed in degs/sec.
  3. GoalDistanceTol: (200) Distance in mm to the goal that will be considered as reached.
  4. GoalAngleTol: (10) Angle in degs to the goal orientation that will be considered as done with orientation.
  5. GoalSpeed: (250) Maximum speed at which end move to goal is executed. (This value and the robot's inertia will decide the actual goal positioning accuracy).
  6. GoalRotSpeed (33.3) Maximum rotational velocity at which end move to goal is executed. This value and the robot's inertia will decide the goal positioning accuracy. (A value of 0 will keep the normal driving value).
  7. GoalTransAccel: (200) Maximum linear acceleration at which end move to goal is executed. (A value of 0 will keep the normal driving value).
  8. GoalTransDecel: (33.3) Maximum linear deceleration at which end move to goal is executed. (A value of 0 will keep the normal driving value).
  9. GoalRotAccel: (200) Maximum rotational acceleration at which end move to goal is executed. (A value of 0 will keep the normal driving value).
  10. GoalRotDecel: (33.3) Maximum rotational deceleration at which end move to goal is executed. (A value of 0 will keep the normal driving value).
  11. GoalSwitchTime: (0.4) Time in secs to switch into end move mode in addition to the coasting time. (To allow for slack in the ramping down of the velocity to zero).
  12. GoalUseEncoder: (true) For fine positioning at goal, the robot can switch to moving based on its encoder pose only. This flag decides this.
  13. GoalOccupiedFailDistance: (1000) When the sensors see that the goal is occupied the robot will still be allowed to drive this close to the goal anyway. This is meant for cases where the robot encounters ramps which show up as obstacles from far away but will need to be traversed to get to the goal.
  14. HeadingRotSpeed: (50) There will be some points on the path where the robot will be exclusively rotating to reach a heading. The user may wish to set the rotational speed in this situation to lower than normal especially when dealing with heavier robots like the Powerbot. (A value of 0 will keep the normal driving value) (This will not override the GoalRotSpeed).
  15. HeadingRotAccel: (50) There will be some points on the path where the robot will be exclusively rotating to reach a heading. The user may wish to set the rotational acceleration in this situation to lower than normal especially when dealing with heavier robots like the Powerbot. (A value of 0 will keep the normal driving value) (This will not override the GoalRotAccel).
  16. HeadingRotDecel: (50) There will be some points on the path where the robot will be exclusively rotating to reach a heading. The user may wish to set the rotational deceleration in this situation to lower than normal especially when dealing with heavier robots like the Powerbot. (A value of 0 will keep the normal driving value) (This will not override the GoalRotDecel).
  17. PlanRes: (106.25) The resolution of the grid used for path planning in mm. It is best to use an integral fraction of the robot half width to allow for more free space. Reduce the PlanRes to allow the robot to traverse through tight spaces such as doorways. Note that decreasing the PlanRes by 2 will increase computation by a factor of 4.
  18. UseLaser: (true) Use the laser for collision avoidance?
  19. UseSonar: (true) Use the sonar for collision avoidance? Due to the sonar inaccuracies the obstacles may appear larger. This flag can be used to set the sonar off when the robot has to navigate through narrow openings such as doors.
  20. PlanFreeSpace: (425*3/2) Preferred distance from the side of the robot to obstacles. The path planner marks each grid cell depending on its distance to the nearest obstacle. Any cell within half the robot width of an occupied cell will still be non-traversable by the robot. The cell at half robot width will be traversable but will have the maximum cost. The cost of the cells beyond half width will progressively decrease. PlanFreeSpace is the distance from the side of the robot beyond which the cell cost stops decreasing and remains constant. Larger values will cause larger excursions around obstacles. (This variable is related to the FreeSpacing variable in previous versions of ARNL, which was measured from the center of the robot.)
  21. FrontClearance: (100) Front clearance in mm of the robot while avoiding obstacles.
  22. SlowSpeed: (100) The speed at or below which the robot is considered to be moving slowly.
  23. SideClearanceAtSlowSpeed: (100) Side clearance in mm of the robot while avoiding obstacles when it is moving at or below the slow speed.
  24. FrontPaddingAtSlowSpeed: (100) Distance in addition to the front clearance of the robot while avoiding obstacles when it is moving at or below the slow speed. Within this padding the SuperMaxTransDecel deceleration is allowed to engage to keep the robot away from the obstacle before resorting to the eStop.
  25. FastSpeed: (1000) The speed at or above which the robot is considered to be moving fast.
  26. SideClearanceAtFastSpeed: (1000) Side clearance in mm of the robot while avoiding obstacles when it is moving at or above the fast speed.
  27. FrontPaddingAtFastSpeed: (1000) Distance in addition to the front clearance of the robot while avoiding obstacles when it is moving at or above the fast speed. Within this padding the SuperMaxTransDecel deceleration is allowed to engage to keep the robot away from the obstacle before resorting to the eStop.
  28. SuperMaxTransDecel: (1000) The maximum translational deceleration allowed for avoiding obstacles. This is used in conjunction with the padding parameters to limit the wear and tear if emergency deceleration is unlimited.
  29. CollisionRange: (2000) The distance from the robot within which the obstacles seen by the sensors and those on the map are used to compute the local path. A smaller collision range will result in less computation but may increase path revisions if the environment is constantly changing.
  30. HeadingWt: (0.8) Heading weight for DWA. Unlike the heading objective computed from a destination pose on the path, as in the conventional DWA, we use a path matching function to match the arc made from the velocities with the desired computed path. (A sketch after this list illustrates how the three DWA weights combine.)
  31. DistanceWt: (0.1) Distance weight for DWA. Distance refers to the distance to collision, if any, should the robot continue on the arc computed from the candidate velocities.
  32. VelocityWt: (0.1) Velocity weight for DWA. Velocity refers to the linear velocity only.
  33. NforLinVelIncrements: (1) If N is the value of this parameter, the number of linear velocity increments in the search table for the DWA is 2*N+1.
  34. NforRotVelIncrements: (8) If N is the value of this parameter, the number of rotational velocity increments in the search table for the DWA is 2*N+1.
  35. SmoothWindow: (2) Smoothing window size for DWA. The sum of the various objectives for each element in the velocity table is far from smooth and a smoothing filter is used over the raw data.
  36. ObsThreshold: (0.1) The threshold value of the occupancy grid to consider as occupied for path planning.
  37. MaxExpansionFactor: (1.0) When the local path planner cannot find a path to a local goal due to an unmapped obstacle, it will look further down the path to a limited extent. If the user wants to force the planner to look ahead by a bigger factor, this variable must be set to a value greater than one.
  38. UseCollisionRangeForPlanning: (false) The robot plans ahead locally for a distance based on its speed. This may not be sufficient in some cases, where the user may want it to look ahead as far as the CollisionRange of the sensors. This flag enables the user to force the robot to look at least as far as the distance over which sensor data is incorporated.
  39. SecsToFail: (8) The time in seconds for which local search will continue trying to replan when it fails to find a safe local path to move and is motionless.
  40. AlignAngle: (10) When the robot is stopped, this is the minimum angle it will rotate in place to align to the planned path before it will move linearly.
  41. AlignSpeed: (10) This is the velocity below which the robot will do the aligning to path direction before linear motion.
  42. SplineDegree: (3) Degree of the B-Splines used to smooth local path.
  43. NumSplinePoints: (5) The number of points that will be used to subdivide the look ahead in the local path which will then serve as the knots for the spline to form over.
  44. OneWayCost: (10) Cost of going the wrong way in one way areas. A high value is recommended for maps with long passages which make loops.
  45. CenterAwayCost: (10) Cost of being away from the central spine of the one way areas. A zero would cause it to not consider centering at all.
  46. Resistance: (2.0) Cost of traversing a cell in restrictive areas and lines. The normal cost for regular non-restrictive cell is 1.
  47. PlanEverytime: (true) Local replanning needs to be done only if the current path is obstructed; this flag decides whether local planning is done every action cycle anyway. Setting it to false can be useful when there are many unmapped obstacles that cause the robot to constantly flip-flop on its way to the goal.
  48. ClearOnFail: (true) If this flag is true, any failure in the local path planning will force the cumulative range buffers to be cleared.
  49. UseEStop: (true) If this flag is true, any obstacle in the path which cannot be avoided by regular robot deceleration will cause the software emergency stop to be engaged which will result in an almost instantaneous stop. (May not be suitable for platforms like the wheelchair).
  50. CurvatureSpeedFactor (1.0) ARNL modulates the linear speed of the robot when it is negotiating a curve by using the curvature of the path and the maximum allowable velocities and accelerations (rotation & linear) for that robot. Despite this, it might be necessary to modulate the linear velocity further by this factor for user defined purposes. Increasing it beyond 1.0 will cause it to go slower on the curves and decreasing it to below 1.0 will cause it to go faster (and probably overshoot its desired path).
  51. LocalPathFailDistance (1000.0) When the sensors see that the local path is blocked, the robot will still be allowed to drive this close to the block. This is meant to help in cases where the robot senses false obstacles from afar and stops too soon such as on ramps. (The distance at which the robot actually stops will depend on the velocity of the robot).
  52. MarkOldPathFactor (0.75) When the robot operates in an environment where there are a number of unmapped obstacles, the local path it plans may flip from one side of an obstacle to the other between cycles. To avoid this, ARNL marks the old path by reducing the costs of its cells by this factor. This factor should be a value between 0 and 1.
  53. DrivingTransVelMax, DrivingRotVelMax ...: (0) These seven parameters related to vehicle velocity, acceleration and deceleration allow a user to override the default driving parameters when they are set to a value other than 0.
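
As a purely illustrative sketch (not ARNL's actual implementation), the three DWA weights above combine the per-candidate objectives roughly as a weighted sum; ARNL's real heading objective uses path matching rather than a single destination pose, as noted in item 30:

  // Illustrative weighted DWA objective for one candidate velocity pair.
  // heading, dist and vel are assumed to already be normalized to [0, 1];
  // the default weights below are the defaults listed above.
  double dwaScore(double heading, double dist, double vel,
                  double headingWt = 0.8, double distanceWt = 0.1,
                  double velocityWt = 0.1)
  {
    return headingWt * heading + distanceWt * dist + velocityWt * vel;
  }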

More Information

For more information on localization or path planning, read the module documentation below or in the "Modules" menu to the left.

See also:
Localization

Sonar Localization

PathPlanning

