Product Review: Dash Cameras with Navigation

The Evolution of Man’s Need for Direction and Documentation.

All of us have, to some degree, experienced the increasing reliance on electronic gadgets to help us get from A to B, especially when we don't know exactly where B is. Along the way, paper maps, TripTiks, and (often as a last resort) stopping or calling to ask for directions have given way to dependence on electronic navigation units. These range from apps on smartphones, to dedicated freestanding navigation units from companies such as Garmin and Magellan, to OEM units built into our cars. Each of these options has its own strengths and weaknesses.

More recently, at least in the United States, dash cameras have started to come into their own. As with navigation units, some are now OEM equipment built into cars, but more commonly they are freestanding units. There are three primary reasons for their increasing popularity: the desire to share images of a road trip, the wish to record driving on a high-performance track or circuit for learning and review, and the need for documentation in the event of an accident or road-rage incident.

In this review, I look at the latest units from two of the key navigation players, Garmin and Magellan, each of which now offers a combined unit housing both navigation and dash cam capabilities. This is the first of several ongoing reviews I am doing on these units. Both manufacturers are providing their respective units to me for review.

In theory, there are some real advantages to combining navigation and a dash cam in one unit, not the least of which are fewer wires and a smaller combined footprint, resulting in less blockage of the view out the windshield. While we know these manufacturers do a very good job providing navigation, the key question is how well the units accomplish both tasks. To provide a comparison for image quality, a dedicated dash cam is included. I am using dash cams from Papago, a leader in the field of aftermarket dash cams and one that has proven itself in my prior testing.

Initial test results:

Here are “raw” (no post-shoot software enhancement) still images generated by each unit at virtually the same time.

From the Garmin

 

From the Magellan


From the Papago


You can see that all units adequately capture the scene, and the license plate on the vehicle directly in front is certainly legible. They also document the GPS coordinates, time, and speed in mph. The Garmin's field of view is slightly narrower than the Magellan's, so objects appear slightly closer. The captured colors, while slightly different for each unit, are close enough to be a non-issue. On close inspection, the Magellan has a slight edge over the Garmin in sharpness and matches the Papago.

One other thing to note is that the Magellan also captures part of its windshield mount, as can be seen in the upper left corner of the image. While there may be a way to configure the mounting components so this doesn't happen, it wasn't intuitively apparent. Both units were placed on the windshield in a manner that replicated the typical location, especially if you intend to use the navigation function of the unit while driving. Here is the setup used:


Another point to note is that both units picked up reflections from the dash. These reflections could have been reduced or minimized by moving the attachment point on the windshield; however, the units were again placed where a typical driver would place them, both to easily view the navigation information and to ensure that the unit did not block any critical forward vision.

Here is a second example of still shots generated by the respective units (each has a touch button to ‘snap’ a still shot independent of whether the unit is recording video at the time).

From the Garmin:


From the Magellan


From the Papago


The dash reflection is apparent in all units, but not to the point of reducing the value of the documentation. When each image is enlarged, you can read not only the license plate of the white car but also the plate on the silver/gray truck. As before, the Magellan is slightly sharper than the comparable image from the Garmin, but the best image is from the Papago.

While the daytime images would be very good for documenting any incident, that was not the case with the still images captured at night.

Here is the Garmin:


And the Magellan


And here is the Papago


No unit was able to effectively compensate for the high dynamic range between the ambient light and the headlights reflecting off the license plate, making it impossible to read the plate on the car immediately in front (some post-editing software magic might make the plate readable).

It should be pointed out, however, that a different vehicle (with a different type of headlights and a different size) can produce a better outcome at night with these same units. For example, here is an image from the same Papago S30 mounted in a sports car (the vehicle used in the current tests was a full-size sedan):


You can see that the license plate is fully legible from this perspective.

Now let’s look at comparative videos.

This first set shows daytime MP4 output and demonstrates how a dash cam could provide documentation in the event of an incident. Shortly after each video starts, you'll see a car on the left side of the screen begin to drift into my lane. If the car had hit me, or caused me to stop abruptly, the video would document several factors, including my speed, the fact that I was in my lane at the time of the incident, and the other car entering my lane. These units all have microphones (which you can turn off) that capture potentially supporting evidence such as a horn or turn signal.

First is from the Garmin:

 

Here is the Magellan:

Here is the Papago:

All three units provide video of reasonable quality, sufficient to document an incident should it be necessary. There are minor differences among the three, and only the Magellan shows noticeable, uncorrected image shake.

This next set shows comparative nighttime videos. As noted with the still shots, you cannot read most license plates because of the high dynamic range contrast between the reflective license plate and its surroundings. However, you can easily make out the type of vehicle, the traffic light colors, and so on, so if an incident occurred you would be able to document your vehicle's position within its lane, its speed, and the right of way.

First is from the Garmin:

Here is the Magellan:

And here is the Papago:

A few words about the navigation function of the two hybrid dash cams.

Both of these companies have been producing navigation units for years and have it down pretty well by this point. Each has earned its camp of followers. The directions, the visual guidance (including automatic map enlargement at upcoming turns or highway splits), and the ease of finding establishments such as entertainment, food, and gas stations, as well as emergency information such as police stations and hospitals, have all greatly improved with the latest iterations of the software. A real plus of these units (in most cases) is free lifetime map and software updates. Additionally, the latest units offer live traffic updates and automatic rerouting.

 


Both offer routing with similar options, quick on-the-go course recalculation, and reasonably good audible call-outs of directions. Similarly, each has updatable points of interest (entertainment, food, gas, etc.). However, one big difference is that the Garmin accepts either manual entry of an address or voice command input, whereas the Magellan only has manual entry. This is an important difference, from both a convenience and a safety perspective. In my opinion, the voice command (which is pretty good in terms of recognition) is much easier to use even when not driving, and critical to have when you are.

Conclusions and recommendations

If your car has a built-in navigation system and you are satisfied with it, then there probably isn't much logic in getting a combination nav and dash cam unit. However, with the increase in red-light runners, distracted drivers, and the like, I highly recommend adding a dash cam to your vehicles. If that is your inclination, the Papago units are worth considering. They have some of the best cameras and reliability of any I have tested. They offer differing levels of bells and whistles, so you will need to explore and find the one that suits your needs. Please see the details at the end of this review for highlights across the Papago units.

If your vehicle has an OEM nav system that you are less than satisfied with (you don't like having to buy expensive map updates, it's a complex process to input an address, etc.) or lacks one completely, then I'd recommend considering the hybrid Garmin line. While the Magellan was certainly capable, the fact that it currently does not accept voice commands takes it out of contention.

One additional plus of an aftermarket unit that combines nav and a dash cam is that you can easily transfer it from vehicle to vehicle if you have more than one, and also take it with you when you travel to use in a rental vehicle.

Final thoughts on future improvements for combination dash cam/nav and dash cam-only units:

I would like to see a larger built-in rechargeable battery so that the unit could turn on and record a bump or impact while the vehicle is parked and powered off. Most vehicles today power down their accessory outlets shortly after the vehicle is turned off, rendering these units ineffective when parked. Even if you have an accessory outlet that remains live when your vehicle is powered down, you probably don't want a dash cam potentially draining your battery. An independent rechargeable power supply in the unit would get around this. Since such an occurrence would hopefully be rare, the battery would only need perhaps 15 minutes or so of reserve for practical purposes.
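To put the suggested reserve in perspective, here is a rough back-of-envelope sizing sketch in Python; the 3-watt recording draw and 3.7-volt nominal cell voltage are illustrative assumptions on my part, not measured figures from any of the units reviewed.

```python
# Rough sizing of a built-in reserve battery for parked-impact recording.
# Assumptions (not measured): ~3 W draw while recording, 3.7 V nominal cell.
RECORD_POWER_W = 3.0
RESERVE_MINUTES = 15          # reserve time suggested above
CELL_VOLTAGE_V = 3.7

energy_wh = RECORD_POWER_W * RESERVE_MINUTES / 60   # watt-hours required
capacity_mah = energy_wh / CELL_VOLTAGE_V * 1000    # equivalent cell capacity

print(f"~{energy_wh:.2f} Wh needed, roughly a {capacity_mah:.0f} mAh cell")
# ~0.75 Wh needed, roughly a 203 mAh cell
```

Under these assumptions, even a modest cell of a few hundred milliamp-hours would cover a 15-minute reserve, which suggests the feature is practical without making the unit much larger.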

Many units come with 'driver assistance safety features' such as an alert that the car in front has started moving (for example, after stopping at a light or stop sign), a reminder to turn on your headlights at dusk, a driver fatigue alarm, forward collision warning, lane departure warning, and the ability to recognize and warn you of an approaching stop sign. Personally, I found these more of a distraction than a safety feature and turned them all off, except the stop sign recognition. It beeps and shows a picture of a stop sign on the rear display as you approach one (even if you have the display turned off, as I did). However, in the units I tested, it failed to recognize at least half of the stop signs I encountered. I'd rather see the manufacturers prioritize improvements in dynamic range, reduction of dash glare, and quick attachment and removal (leaving the windshield component in place), and drop the driver assistance features.

Many of these units come with a hardwired 12-volt accessory plug. Many vehicles today don't have multiple accessory outlets (at least up by the driver), and drivers are often already using the sole outlet to power a radar detector or a phone charger. It would make more sense to have these units draw power from a USB connection, since cars typically have several of those.

Additional detail on each unit tested:

Garmin Drive Assist 51 LMT-S


Pros:

  • Very easy to set up
  • Voice command works well
  • Easy to read with a quick glance
  • Has "live traffic"
  • Has built-in WiFi for updates
  • Can be paired with your Garmin smartwatch
  • Incident Notification: When the unit detects an incident, the device can send an automated text message to a designated contact after 60 seconds. The message is sent from a third-party service, not from your phone, and includes your name and a pre-selected message. You can cancel the notification within that 60-second window. Incident Notification requires a connection to Smartphone Link and an active mobile data connection, and it can be disabled if desired.
  • Travelapse: The Travelapse feature captures video frames at a set interval (one frame for each mile you travel, for example) and creates a fast-motion video of your trip. The device sets the distance interval automatically, based on the length of the route and the space available on the memory card (see the sketch after this list for a sense of how such an interval could be chosen). The unit continues to record regular dash cam video while recording a Travelapse video.
  • The “Where Am I?” feature gives you instant access to important information in case of an emergency. When you touch the vehicle icon on the map screen, the “Where Am I?” feature provides the coordinates (including elevation) of your current position, plus the nearest address and intersections. There are also buttons to help you locate the nearest hospitals, police stations, and gas stations. You can also save the location for future reference.
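To make the Travelapse description above concrete, here is a hypothetical sketch of how a distance interval could be derived from route length and free card space. The frame size, card budget, and function name are purely illustrative assumptions; Garmin's actual logic is not published in the material I reviewed.

```python
# Hypothetical Travelapse-style interval calculation (not Garmin's actual code).
def travelapse_interval_miles(route_miles: float,
                              free_card_mb: float,
                              mb_per_frame: float = 0.5,        # assumed frame size
                              card_budget_fraction: float = 0.25) -> float:
    """Distance between captured frames, in miles."""
    # Reserve most of the card for normal dash cam recording; spend only a
    # fraction of the free space on the time-lapse frames.
    max_frames = (free_card_mb * card_budget_fraction) / mb_per_frame
    return max(route_miles / max_frames, 0.1)    # cap at ~10 frames per mile

# Example: a 300-mile route with 4 GB free captures a frame roughly every
# 0.15 miles under these assumptions.
print(round(travelapse_interval_miles(300, 4096), 2))
```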

Cons:

  • When mounted where you would normally place it to have access to routing, the camera catches internal windshield reflections. I would like to see some form of lens shield to prevent this.
  • I would like a quick release from the windshield mount that doesn't change the unit's position on the windshield (so you don't have to re-align the camera). Some other, non-dash cam Garmin units have a magnetic mount between the unit and the windshield, so you can leave the mount in place and easily pull off the unit without altering the alignment.

Magellan RoadMate 6630T-L


Pros:

  • The unit comes boxed with a Quick Start Guide, an 8 GB microSD card and reader, the components to attach it to your windshield (an effective suction cup), and a 12-volt accessory plug with a mated USB power cable.
  • You will want to set the unit up at home, so that you can log onto your Wi-Fi for the normal updates to the maps and software.
  • Once set up, the unit is pretty intuitive and easy to use. However, there is no voice command for inputting addresses; you have to use the touch screen manually.

Cons:

  • No voice command interface.
  • When mounted where you would place it to be able to use the map/routing, the camera catches its own mounting bracket (visible in the upper left part of the videos).
  • (minor) The unit has a red LED power light on the upper left side of the front. This is fine during the day, but an annoyance at night.
  • The static cover on the unit tells you to charge it for 2-4 hours before using, but the unit only comes with a 12-volt accessory plug and mated USB cable. The included Quick Start Guide does not mention that you can charge it using a power pack such as a Go Puck©, your computer's USB port (a relatively slow charge), or an AC adapter (not supplied). The full downloadable User Manual does mention that you can use an AC adapter. I placed a call to customer support and quickly got through; the very professional tech confirmed that you can use power packs, computers (again, noting that charging will be slower), or AC adapters in addition to the 12-volt accessory plug supplied with the unit.
  • The voice sounds tinny, though this particular unit may have had a cracked board, since it also randomly lost power.
  • It doesn't appear to have any image stabilization software; note the shake in the videos.

Papago


As noted earlier, Papago has a variety of dash cams with somewhat different options. All are very good dash cams, so the decision as to which one is best depends on your needs. Tested and pictured above are the 760, 520, 30G, and S30.

All units include lifetime software updates, removable microSD memory cards, and a set of driver alerts such as stop sign recognition and shock/impact auto-save recording. All units also operate properly in temperatures from 14 degrees up to 149 degrees Fahrenheit. This upper range is impressive, since many competing dash cams malfunction at the higher temperatures of a windshield fully exposed to the sun.

Here are the highlights of the 4 units tested:

GoSafe 760: This is one of their more advanced multi-purpose units. It comes with a forward-facing wide 140-degree f/2.0 camera in the main unit and a separate rear- (or side-) facing 120-degree f/2.4 camera. It also has connections for their optional GPS antenna (for adding a GPS coordinate overlay to recorded images) and their optional D10E tire pressure monitoring unit. This unit would probably be best suited to an older car that doesn't have built-in tire pressure monitoring, and/or to individuals who want to document both forward and rear (or side) views simultaneously.

While this offers an impressive array of options, one thing to consider is that all of the options and the additional camera require physical connections to the main unit. So, if you are using the rear camera, GPS, and tire monitoring systems, you will have four sets of cords attaching to the main unit. Unless you plan on tucking some of these into the headliner or elsewhere, that is going to be pretty messy in your vehicle.

GoSafe 520: This unit has their widest lens, at 146 degrees and f/2.0. It also records the highest-quality video of the group, 2K at a 21:9 aspect ratio. It does not have built-in GPS, nor does it offer the option of using their GPS or TPMS accessories as the 760 does. This unit would be ideal if you want to capture a driving trip, track experience, etc. to share with others. It will also, of course, provide excellent traffic-incident footage.

GoSafe S30: This unit has a 135-degree lens with an f/1.9 aperture. It also offers the option of using their GPS and tire pressure monitoring systems. It has a very small footprint on your windshield and is unobtrusive, unless you opt to attach the GPS and/or TPMS. Then, similar to the 760, you are going to have two or three sets of wires connected to this small unit.

GoSafe 30G: This is one of Papago's latest units and has a wide 140-degree f/1.9 lens. It has GPS built in and offers the option of attaching the TPMS. If your vehicle already has a decent navigation system and TPMS, then this would be the unit to consider. It is relatively small, needs just the power cord (unless you add the TPMS), and produces high-quality videos stamped with the time, date, and GPS coordinates.

 

InSight© Product Reviews

A bit of background:

I have always been inquisitive about how things work, and that curiosity is coupled with a high level of mechanical ability. Over the years, friends have frequently relied on my research and evaluations to help them with product decisions. Several encouraged me to share my write-ups in the 'public' arena.

I am adding a new section to my blog which will include practical product reviews.

No compensation is received for any of my reviews. When I started publishing reviews, the items were purchased directly by me. Of late, most products have been provided to me at no charge by the manufacturer for review.

When possible, I try to compare and review comparable products since I think this provides a better benchmark.

If you have questions about one of my reviews, please use the contact form to reach me.  You will also find an area for commenting following each review.  I look forward to hearing from you and hope, where applicable, the information will assist you in making purchasing decisions.

Thanks for stopping by!

 

The first review, Dash Cameras with Navigation: The Evolution of Man's Need for Direction and Documentation, is at http://wp.me/p81CBz-99

Autonomous Vehicles: Part 2

This is the second blog post on autonomous vehicles; for the introduction and first part, please see Autonomous Vehicles Part 1.

Autonomous vehicles- the major potential ‘cons’:

Connectivity:

The sine qua non of the CAV (connected autonomous vehicle) is communication. It is at once its strength and, borrowing from Greek mythology, its Achilles' heel. To function, autonomous vehicles must rely on a tremendous amount of inter- and intra-vehicle connectivity. All of the on-board systems (lidar, radar, cameras, engine parameters, lane departure sensors, etc.) have to communicate flawlessly with one another, as well as vehicle to vehicle and with traffic management (lights, flow, emergency vehicles, etc.).


It sounds great in theory, but in actuality this is astoundingly difficult to pull off. Keep in mind that this connectivity has to function flawlessly all of the time. There was a bit of irony at CES 2017 in that every presentation I attended experienced a problem at least once with the remote presentation control unit communicating correctly with the media controller equipment. And that was connectivity at its most basic level! On a more complex level, there was Faraday's problem during the press review, where their car failed to accept the command to self-park.

Obviously, you can't have a break in connectivity, or the autonomous vehicle will (hopefully) come to a complete, unintended halt, and in doing so become a potential accident instigator for both autonomous and non-autonomous vehicles around it. What level of redundancy will be sufficient to prevent a loss of connectivity? While intra-vehicle redundancy (among the numerous components necessary for an autonomous system) seems reasonably surmountable, what will it take to ensure that inter-vehicle connectivity, traffic management connectivity, and live web connectivity are flawless?

Simultaneously with ensuring the continuous flow of connectivity, there are still two large problems to solve. First, all communication has to be hack-proof; we have seen videos of someone remotely gaining access to a vehicle's electronics via one of its communication channels and taking over one or more of the vehicle's systems (acceleration, braking, steering), and hackers have demonstrated this remotely on cars ranging from Jeeps to Teslas. Second, a great deal, if not all, of the information has to preserve the privacy of the vehicle and its occupants.

Further complicating the connectivity issue is what was tagged "Babel" at the CES 2017 session A United Language for the Connected Car. The general definition of babel is a confused noise, typically made by a number of voices. Unfortunately, it applies to the current status of the proprietary software designed for many of the components needed for a connected vehicle. The herculean challenge is to get a universal open language used across all components and systems for autonomous vehicles. Beyond the current Babel of software languages is the growing quagmire of state and federal regulations aimed at controlling autonomous vehicle access to our roads. Currently, an autonomous vehicle approved under nascent laws in one state may not be able to continue driving when it crosses into an adjacent state. For example, while an autonomous car can be driven in Nevada, it can't legally continue into nearby Oregon or Idaho, and if you are in an autonomous car in Florida, you could not continue into any of its adjoining states.

Societal Impact:

The RAND Corporation pointed out in its 2016 publication Autonomous Vehicle Technology: A Guide for Policymakers that rather than reducing congestion on our roads, autonomous vehicles may in fact increase congestion. This conclusion is based on the reduced transportation costs borne by individuals. For example, the cost of automotive insurance shifts from the owner to the manufacturer of the autonomous vehicle. This, combined with increased access (potentially no need for individual driver's licenses), could produce a substantial surge in the number of individuals traveling at the same time. Of course, it could be moderated by increased reliance on mass transit versus low-occupancy vehicles. The elimination of the hassle often associated with finding a parking space (your autonomous vehicle could drop you off and then continue on to a remote parking area, awaiting your request for it to come back and pick you up) may also contribute to a significant increase in willingness to 'hop' into your vehicle and head to a dense, high-use urban area.

What are the implications of the potential loss of transportation-sector jobs and their incomes, and of the loss of tax revenues from reduced or eliminated parking garages, meters, etc.?

And while most believe that autonomous vehicles (or even semi-autonomous ones) will significantly reduce the number of deaths caused by crashes, there is one part of our society that has depended on these deaths: organ donation. "It's morbid, but the truth is that due to limitations on who can contribute transplants, among the most reliable sources for healthy organs and tissues are the more than 35,000 people killed each year on American roads (a number that, after years of falling mortality rates, has recently been trending upward). Currently, 1 in 5 organ donations comes from the victim of a vehicular accident." [From Future Tense: The Citizen's Guide To The Future, Dec. 30, 2016]  The potential impact on an already stretched organ donation system is catastrophic. "All of this has led to a widening gap between the number of patients on the organ wait list and the number of people who actually receive transplants. More than 123,000 people in the U.S. are currently in need of an organ, and 18 people die each day waiting, according to the Department of Health & Human Services. Though the wait list has grown each year for the past two decades, the number of transplants per year has held steady in the last decade, at around 28,000." [Fortune: If driverless cars save lives, where will we get organs? By Erin Griffith, Aug. 15, 2014]

Moral Dilemma:

You may be familiar with the paradox of Buridan's ass. As the story goes, a hungry donkey was placed equidistant between two identical bales of hay. Unable to choose which one to go to, the donkey died of starvation. The movement toward autonomous vehicles faces at least two analogous conundrums: how many deaths caused by autonomous vehicles is an acceptable number, and who will have final approval of the algorithms that decide, when death in a pending accident is unavoidable, who is to be sacrificed. The analogy is that if we can't reach agreement on both of these issues, the movement toward autonomous vehicles may come to a halt.

Even though these two conundrums are inextricably related, let me briefly explore each separately. We know factually that autonomous vehicles can lower the number of deaths currently associated with driver error, and that the number won't rapidly be reduced to zero. Using the approximately 32,000 automotive-related deaths per year (cited in my Part 1), what percentage reduction would be 'acceptable'? Would a 50% reduction, resulting in 16,000 fewer deaths per year but also 16,000 remaining deaths per year involving autonomous vehicles, be OK? Would it take a 75% reduction, leaving 8,000 deaths per year involving autonomous vehicles, to be considered OK? The consensus appears to be that while the astounding figure of 32,000 deaths per year caused by human error behind the wheel isn't good, we seem to have 'accepted' it without demanding immediate action on a national or global level. However, few believe we would be as complacent if the news were filled with 16,000 or even 8,000 deaths per year as a result of autonomous vehicles.

Recently a number of articles have appeared highlighting the other conundrum: algorithms being designed to decide who lives and who dies when the outcome of a pending accident is unavoidable. For example: "A self-driving car carrying a family of four on a rural two-lane highway spots a bouncing ball ahead. As the vehicle approaches a child runs out to retrieve the ball. Should the car risk its passengers' lives by swerving to the side—where the edge of the road meets a steep cliff? Or should the car continue on its path, ensuring its passengers' safety at the child's expense?" [Driverless Cars Will Face Moral Dilemmas, by Larry Greenemeier, June 23, 2016, Scientific American] Or: "Imagine you're behind the wheel when your brakes fail. As you speed toward a crowded crosswalk, you're confronted with an impossible choice: veer right and mow down a large group of elderly people or veer left into a woman pushing a stroller." [Driverless cars create moral dilemma, by Matt O'Brien, The Associated Press, January 18, 2017]  Who should be entrusted with developing and ultimately approving the necessary algorithms? Will there be one algorithm for all autonomous vehicles globally, or will there have to be country- or culture-specific versions?

Real-World Impediments to Fully Autonomous Vehicles:

At this point, autonomous vehicle developers have not been able to handle several frequent occurrences typical of our driving environments. If a fully autonomous car comes upon road construction, it doesn't know how to ignore the programming that tells it not to cross a double yellow line, or to purposely drive into a temporary lane without lane markers. It is basically programmed to shut down or, in Nissan's case, phone 'home.' At CES 2017, Carlos Ghosn, Chairman and CEO of Nissan, said during his keynote speech that they are planning on having a centralized station, staffed 24/7, to handle "edge" circumstances for their autonomous cars. The logic is that the human contacted by the autonomous car would review the information available from the on-board sensors and map out an alternative route or action. It is unclear how this approach would scale up instantaneously when, for example, a large section of a country experiences an extreme disrupter such as flooding or an earthquake.

Similarly, autonomous vehicles cannot negotiate a dirt road, or a road that lacks up-to-date GPS mapping. As Neal Boudette points out in his article "5 Things That Give Self-Driving Cars Headaches," autonomous cars will have a very hard time with unpredictable, reckless drivers sharing the road in non-connected vehicles [New York Times, June 4, 2016].

The current thinking of many developers is to require a (human) driver to serve as 'back-up' in those circumstances where the autonomous or semi-autonomous vehicle encounters a situation it isn't programmed to handle. Unfortunately, there are severe limitations on how well most drivers would be able to cope with such an unexpected, instantaneous hand-off (one doesn't have to look any further than the tremendous increase in accidents attributable to drivers distracted by texting). The biggest problem is a lack of sufficient reaction time, even at moderate speeds, let alone highway speeds. This is further complicated by the well-documented phenomenon of vigilance decrement: the longer the autonomous vehicle handles the driving properly, the less attentiveness and readiness the 'back-up' human will have to properly respond to the hand-off.

In order to succeed, there is going to have to be a significant educational effort aimed at the current, and potential, driving public during the transition period when autonomous and semi-autonomous vehicles share the road with traditional non-connected vehicles. Part of this education will need to focus on the trust issue, which is confounded by demographic and age differences in acceptance.

In some ways, many of the concerns today parallel those around one of the earliest autonomous vehicles designed to transport people: the elevator. The original elevators were relatively dangerous vertical transport platforms, operated by a trained elevator operator. As safety concerns were addressed, elevators vastly improved, gaining doors, fixed stopping points, redundant mechanisms to prevent free fall, and so on. Shortly after the turn of the twentieth century, push buttons were introduced that permitted selecting a specific floor and letting the elevator proceed automatically to it. However, it wasn't until after World War II, roughly forty years after automation, that elevator operators were no longer placed in most elevators. One of the main reasons for the slow transition from manually operated to fully automated elevators was that people were fearful of getting into an elevator that did not have a human operator. How likely are you to entrust your life to the newest mode of autonomous vehicle?

Autonomous Vehicles Part 3 will explore: What is next?  Is the light at the end of the tunnel daylight or an oncoming train?

CES 2017 Autonomous Vehicles: Part 1


Autonomous Vehicles: Part 1

Probably the best way to start is to use CES' clever one-word campaign that defines CES 2017: Whoa!

Having spent 5+ days trying to take it all in (and by "all" I mean more than 3,800 exhibiting companies spread across several Las Vegas resort locations and the Las Vegas Convention Center, making it the largest event of its kind), I have to agree that Whoa! best describes it.

CES Overview

If you are not familiar with CES, it used to be known as the Consumer Electronics Show and is now called CES: the global consumer electronics and consumer technology trade show. While the new official name isn't as 'catchy' as the original, it is more accurate. Exhibitors and buyers from 150-plus countries attend, network, and place orders for this year's hottest tech.

Additionally, it serves as a platform for the top experts in related fields and industries to share ideas and learn from one another. While it is not open to the public, it has a massive attendance (over 175,000 this year), along with a large media presence to get the word out.

CES 2017 covered a broad range of technology and its impact on:

  • Aging and accessibility
  • Cyber security
  • Drones (from micro to those capable of carrying an individual)
  • Enhanced audio and video
  • Gaming, VR and AR (virtual and augmented reality)
  • Health, fitness and wearables
  • The Connected world
  • Sustainable and Eco-friendly tech
  • Vehicle technology
  • Startups
  • Family and lifestyle
  • Content and entertainment
  • Robotics

The focus of this blog post is one slice of CES 2017 which, in my opinion, will ultimately impact virtually everyone: autonomous vehicles. From my perspective, it truly reflects the synthesis and status of the technology found across most of the areas in the bullet list above.

Introduction: Just what is an autonomous vehicle?

Is it a car that can drive down the road by itself like a Tesla, one that can park itself like a Toyota, or one that can brake by itself to avoid a collision like a Cadillac? Or is the term reserved for something more like what was depicted in the 1960s series The Jetsons?

Image of the Jetsons' flying car, courtesy of Newsday

At CES 2017 there were numerous autonomous vehicles in all shapes and sizes.

And there were even semi-autonomous trucks demonstrating platooning technology, in which they travel in caravan fashion, saving fuel and driver effort.


SAE has developed the most broadly accepted definitions of the levels of driving automation. As seen on the accompanying chart, they describe levels ranging from 'no automation' (level 0) through 'full automation' (level 5). Most important is the division of responsibilities between the human and the 'system.' The biggest shift is between levels 2 and 3, where the responsibility for monitoring the driving environment shifts from the person to the system. The role of the human becomes one of back-up to the automated driving system. Of course, this shift in responsibility is one of the thorniest and most complex components of the process.


Autonomous vehicles– the major potential ‘pros’:

The top reasons to move toward automation level 3, 4, or ultimately 5 include anticipated significant reductions in vehicular deaths, congestion, and pollution, as well as facilitated transport of individuals who are unable to drive or should not be driving (too old, infirm, disabled, too young, or under medically induced or other impairment).

For example, there are approximately 32,000 automotive-related deaths per year in the United States, and NHTSA has estimated that between 90 and 94% of those are due to human error. Further, the economic cost is roughly $242 billion and the societal harm roughly $836 billion. Automated driving systems, from as low as SAE level 2 up through level 5, are expected to significantly reduce deaths due to human error. Most agree it is reasonable to expect automation to quickly cut automotive-related deaths by half or more.

Damien Riehl (a technology lawyer with a background in legal software design) summed up the critical advantages of the 'hand-off' from human to machine: "Computers do not share human drivers' foibles: They cannot be inebriated, they don't text, and they don't fall asleep. Automated-driving systems can also have super-human qualities: 360–degree vision; 100 percent alert time; constant communication with the road, traffic lights, and other cars; "sight" through fog and darkness; and universal, system-wide routing for traffic-flow optimization. Computers react faster: Humans' reaction time is approximately 1.5 seconds, while computers' reaction times are measured in milliseconds (and, per Moore's Law, improving exponentially)." [From the Bench & Bar of Minnesota, the official publication of the Minnesota State Bar Association; Riehl, Oct. 4, 2016]
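To illustrate why that reaction-time gap matters, here is a quick calculation of the distance a vehicle covers before any braking even begins. The 1.5-second human figure comes from the quote above; the 10-millisecond automated figure is an illustrative assumption standing in for "measured in milliseconds."

```python
# Distance traveled during the reaction time, before braking starts.
def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    speed_ms = speed_mph * 0.44704      # miles per hour -> meters per second
    return speed_ms * reaction_s

for speed in (30, 65):
    human = reaction_distance_m(speed, 1.5)      # ~1.5 s human reaction time
    automated = reaction_distance_m(speed, 0.01) # assumed ~10 ms automated reaction
    print(f"{speed} mph: human ~{human:.1f} m vs automated ~{automated:.2f} m")

# 30 mph: human ~20.1 m vs automated ~0.13 m
# 65 mph: human ~43.6 m vs automated ~0.29 m
```

At highway speed, the human covers well over 40 meters, roughly ten car lengths, before reacting; under these assumptions the automated system covers less than a third of a meter.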

Another significant advantage of moving toward autonomous vehicles comes from the connectivity required in each vehicle. Autonomous vehicles will need to be able to 'communicate' with other vehicles on the same road and with environmental variables such as traffic lights, weather, traffic flow, etc. This critical inter-connectivity will enable aggregated and, in most cases, instantaneous learning by the vehicle's system. It is much like what we see today in applications such as WAZE and LIVE, where we as drivers hear of traffic issues, police actions, etc. in near real time and can choose to act upon such information, learn from it, or ignore it. The difference, of course, is that autonomous vehicles will be programmed with algorithms to instantaneously incorporate the new information and take appropriate corrective action. For example, if an autonomous vehicle is driving along a road where there is a traffic accident or construction, it would send that information to other autonomous vehicles farther back on the same route, resulting in seamless rerouting. This built-in collective and incremental learning means that the more autonomous vehicles drive, and the more of them there are on the road with instant, continuous vehicle-to-vehicle and vehicle-to-environment connections, the more efficient and expeditious each will become.
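Since the rerouting example above is essentially a message-passing problem, here is a minimal sketch of the idea. The message format, class names, and routing rule are illustrative assumptions of mine, not any standardized vehicle-to-vehicle protocol.

```python
# Minimal sketch of incident sharing between connected vehicles (illustrative only).
from dataclasses import dataclass

@dataclass
class IncidentReport:
    road_id: str
    mile_marker: float
    kind: str                      # e.g., "accident" or "construction"

class ConnectedVehicle:
    def __init__(self, vehicle_id: str, road_id: str, mile_marker: float):
        self.vehicle_id = vehicle_id
        self.road_id = road_id
        self.mile_marker = mile_marker

    def receive(self, report: IncidentReport) -> None:
        # Only vehicles still approaching the incident on the same road reroute.
        if report.road_id == self.road_id and self.mile_marker < report.mile_marker:
            print(f"{self.vehicle_id}: rerouting around {report.kind} "
                  f"at mile {report.mile_marker}")

def broadcast(report: IncidentReport, fleet: list) -> None:
    for vehicle in fleet:
        vehicle.receive(report)

fleet = [ConnectedVehicle("car-A", "I-15", 42.0),   # approaching the incident
         ConnectedVehicle("car-B", "I-15", 47.5),   # already past it
         ConnectedVehicle("car-C", "US-95", 12.0)]  # on a different road
broadcast(IncidentReport("I-15", 45.0, "construction"), fleet)
# Only car-A reroutes.
```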

Potential applications abound (many of which you have probably heard about), including driverless pickup via Uber/Lyft, calling your own car to pick you up and drop you off, driverless public transportation like Olli, platooning of freight-hauling trucks, and so on. Autonomous vehicles also open up new modes of transportation, such as the hyperloop. For example, Hyperloop One is being built north of Las Vegas, Nevada, as a proof of concept. Here, in their own words, is an explanation: "The Hyperloop is a new way to move people or things anywhere in the world quickly, safely, efficiently, on-demand and with minimal impact to the environment. The system accelerates a passenger or cargo vehicle through a steel tube in a near-vacuum using that linear electric motor. The autonomous vehicles glide comfortably at faster-than-airline speeds over long distances due to the extremely low aerodynamic drag and non-contact levitation. There's no direct emissions, noise, delay, weather concerns nor pilot error." [Bruce Upbin, VP Strategic Communications, Hyperloop One]  Ultimately, the vision for the hyperloop is to have direct, non-stop connections between cities, with hubs to which you could either drive your own car or take an autonomous car. At the hub you would drive onto an autonomous platform, be grouped with other platforms going to the same location, and be sent out within minutes of driving on, traveling non-stop to your destination at speeds of up to 700+ miles per hour. At your destination, you would leave the hub and drive or be driven to your objective.

Autonomous vehicles could also logically eliminate the need for personal (or multiple) car ownership and personal automobile insurance, significantly reduce the need for parking garages in cities, decrease pollution, and increase personal time.

But is it all rosy?

In Part 2, I will explore the major potential 'cons' of autonomous vehicles: https://insight.daumphotography.com/2017/01/25/autonomous-vehicles-part-2/

For a sample of my photographs from CES 2017 please see http://www.daumphotography.com/Events/2017-CES/

Fabulous flowers

Flowers are found almost everywhere. Typically they add color, and often fragrance, that help them stand out in our visual landscapes.

Frequently we just ‘breeze’ by them as we pass from here to there, and as a result miss the stunning nuances they are waiting to share.

Here is a sampling from my travels around the world:

Hope you enjoy!  Please feel free to share your thoughts.