Driverless.global
Driverless Autonomous Technology Business Directory

May 3

AIDriving COO Justin Xu Hopes to Enhance Fleet Safety With Artificial Intelligence


According to the industry's general view, the road to autonomous driving is divided into four stages, and the automotive industry is now in the transition from ADAS to semi-automated driving. Many automakers, ADAS and autonomous-driving startups, and investors are trying to seize their own opportunities.


The chief operating officer of AIDriving, Mr. Justin Xu, was glad to introduce the company's ADAS device to us. “AIDriving C7 for fleet safety is an industry-wide innovation with a dual-camera design,” he said, “one camera for forward collision avoidance and the other for driver fatigue monitoring.” The forward-facing ADAS camera can recognize vehicles, pedestrians, cyclists, motorcyclists and the lane ahead. Meanwhile, the driver-facing camera can detect closed eyes, yawning, inattention, phone use, smoking and driver absence.
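As a rough illustration of how such a dual-camera unit might turn detections into alerts, here is a minimal Python sketch. The event names, confidence threshold and alert_for function are hypothetical, chosen only to mirror the detection lists above; they are not AIDriving's actual interface.

    from dataclasses import dataclass

    # Detection categories quoted above; the identifiers are illustrative.
    FORWARD_EVENTS = {"vehicle", "pedestrian", "cyclist", "motorcyclist", "lane_departure"}
    DRIVER_EVENTS = {"eyes_closed", "yawning", "inattention", "phone_use", "smoking", "driver_absent"}

    @dataclass
    class Detection:
        source: str        # "forward_camera" or "driver_camera"
        event: str
        confidence: float  # 0.0 to 1.0

    def alert_for(d: Detection):
        """Map a camera detection to an alert label, or None if it is ignored."""
        if d.confidence < 0.8:  # illustrative threshold, not a real calibration
            return None
        if d.source == "forward_camera" and d.event in FORWARD_EVENTS:
            return "collision-warning:" + d.event
        if d.source == "driver_camera" and d.event in DRIVER_EVENTS:
            return "fatigue-warning:" + d.event
        return None

    print(alert_for(Detection("driver_camera", "eyes_closed", 0.93)))
    # -> fatigue-warning:eyes_closed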


Data shows that about 1.25 million people die each year as a result of road traffic accidents, which works out to roughly one death every 25 seconds. Traffic accidents not only kill and injure people but also burden work-related fleets and insurance companies. Even though fleets have been equipped with a variety of telematics hardware, fleet owners still face several problems.
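That headline rate checks out with a few lines of arithmetic (a quick sanity check, not new data):

    deaths_per_year = 1_250_000
    seconds_per_year = 365 * 24 * 60 * 60      # 31,536,000 seconds

    print(seconds_per_year / deaths_per_year)  # ~25.2 seconds between deaths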

Justin said, “Humans unintentionally make mistakes, hence we’re training our artificial intelligence model to help drivers reduce the risk of collisions and save lives.” Built on artificial vision and deep learning technologies, AIDriving’s ADAS-based fleet safety solution can enhance fleet safety and provide a decidedly more secure driving environment.


As a leading global ADAS solutions provider, AIDriving is also the first ADAS startup in China (founded in 2011). The founder, Mr. Xiaoguang Zhang, previously worked at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and has abundant experience in vehicle active-safety systems. In 2013, AIDriving released a smartphone-based ADAS app and won two notable competitions: the excellent player prize at the Android World Global Developers Conference and runner-up in the Hardeggs iFuture Hardware Competition. In 2014, AIDriving received around 5 million USD in financing, then went to CES 2015 and released its first professional-level ADAS device there. By the end of 2015, AIDriving had acquired a hardware company.


AIDriving is now focusing on the auto OEM market and the fleet market. Until autonomous vehicles become a reality, the OEM market has the more urgent and larger demand for ADAS. Combined with vehicle CAN bus data and the braking system, a vehicle becomes more intelligent, enabling features such as Automatic Emergency Braking and Adaptive Cruise Control, as the sketch below illustrates.
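The core of such a feature is a time-to-collision check fed by forward-sensor range and CAN bus speed data. A minimal sketch, with thresholds that are illustrative assumptions rather than any OEM's calibration:

    def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
        """Seconds until impact if neither vehicle changes speed."""
        closing = ego_speed_mps - lead_speed_mps
        return float("inf") if closing <= 0 else gap_m / closing

    # 20 m behind a car while closing at 10 m/s leaves 2.0 seconds.
    ttc = time_to_collision(gap_m=20.0, ego_speed_mps=25.0, lead_speed_mps=15.0)
    if ttc < 1.5:
        print("apply emergency braking")
    elif ttc < 2.5:
        print("warn the driver")   # fires here, since ttc == 2.0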

In the aftermarket, the fleet and insurance industries are the target markets for Mobileye and other ADAS companies. Countries including the USA, Chile and China have passed laws requiring certain commercial vehicles to install aftermarket ADAS systems. In Israel and Mexico, vehicles with Mobileye’s collision avoidance system benefit in the form of insurance discounts.

Recently, the Ministry of Transport of the People’s Republic of China issued an announcement making it mandatory for all commercial vehicles in operation to install forward collision warning and lane departure warning systems.


As a result, all Chinese ADAS companies are ushering in a big market opportunity. Among them, AIDriving ranks first in ADAS accuracy in China, and its market feedback is also positive. The latest news is that AIDriving expects to close its A+ financing round in May. Good luck!


Mar 10

Microsoft has created an A.I. that can write its own code



Published on March 3, 2017


You know how teachers used to say, “You can’t cheat your way through life, so don’t cheat on this test.” They may have been right about the test...but not life.

Researchers at Microsoft and the University of Cambridge have developed an Artificial Intelligence that’ll write code all by itself.

They call it DeepCoder

And it could change the job description for software developers.

The vision for DeepCoder is for a person to be able to merely give it an idea and the AI will automatically write all of the necessary code, without errors, in just seconds. More than anything, it will allow anybody with an idea to potentially build an internet business worth millions.

You’d think that DeepCoder would put a lot of programmers out of a job, but Armando Solar-Lezama, a professor at MIT, doesn’t think so. He believes this will enable programmers to attack more sophisticated problems, while the AI takes care of the tedious dirty-work.

Ironically, DeepCoder is a cheater itself

It works through a method called program synthesis, which essentially means piecing together lines of code taken from other finished software. In the developer community, borrowing code this way is commonplace among mid-to-lower-level coders (script kiddies) because it is efficient.

Currently, DeepCoder is capable of solving basic challenges of the kind seen at programming competitions, producing nothing more than five lines of code at a time. But it’s just starting out.
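To make the idea concrete, here is a minimal sketch of the enumerative search at the heart of program synthesis, written over a toy four-operation DSL. DeepCoder's actual contribution, a neural network that predicts which operations to try first, is omitted here, and the DSL and examples are invented for illustration.

    from itertools import product

    # A toy DSL of list-to-list operations.
    DSL = {
        "reverse":  lambda xs: xs[::-1],
        "sort":     lambda xs: sorted(xs),
        "double":   lambda xs: [2 * x for x in xs],
        "drop_neg": lambda xs: [x for x in xs if x >= 0],
    }

    def synthesize(examples, max_len=3):
        """Return the first operation sequence consistent with all examples."""
        for length in range(1, max_len + 1):
            for ops in product(DSL, repeat=length):
                def run(xs):
                    for op in ops:
                        xs = DSL[op](xs)
                    return xs
                if all(run(inp) == out for inp, out in examples):
                    return ops
        return None

    # Find a program mapping [3, -1, 2] to [4, 6].
    print(synthesize([([3, -1, 2], [4, 6])]))
    # -> ('reverse', 'double', 'drop_neg'), one of several valid three-step programs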

Its advantages are what set it apart

Because it is an AI, it has very few limits on its work capabilities, allowing it to scour source-code databases more swiftly and thoroughly, and to put together programs in ways humans may not have thought of.

Most importantly, it has a great memory – remembering which code worked last time and which didn’t.

Honestly, why should I care?

Aside from empowering the non-coder to build software, this signals the exciting times that are ahead for Artificial Intelligence and automation of white-collar jobs.

It can be frightening to think about a machine taking your job, but as long as you are aware of the possibility before it happens, you can prepare. Not to mention, AI will start by taking care of our dirty work.

For an accountant, AI will first learn to keep track of entries in the general ledger. For a marketer, AI will compile massive amounts of buying history and present you with a report on best marketing strategies. And for programmers, AI will write a lot of the time-consuming, simple code.

As long as you are open to adapting and aware of the changes, you won’t go extinct.

Feb 12

Quanergy's $250 Solid-State LiDAR - Self-Driving to the Masses?

 


Could Quanergy’s new $250 Solid-State LiDAR bring Self-Driving to the Masses?

By Paul Godsmark

One of the big hurdles when equipping vehicles with sensors for autonomous driving is the cost. For example, the Light Detection and Ranging (LiDAR) sensors that power many versions of self-driving car technology are pricey, currently ranging from several thousand dollars up to $85,000 per sensor, and vehicles often need multiple sensors to see enough of what is going on around them to drive safely.

But the latest entry into the LiDAR market could change all that. Quanergy has introduced the world's first affordable solid-state LiDAR, coming in at about $250.

LiDAR Sensors Are Key

Quanergy builds LiDAR systems. LiDAR is a remote-sensing method that uses light in the form of a pulsed laser to measure ranges (variable distances) from the car to everything around it, enabling the car to track its environment better than other types of sensor can.

Radar-based systems are really good at operating in all weather, as they see through most things that obscure vision. However, they lack resolution: they can tell you that something is there, but not whether that something is a car, a barrier, or a wall.


Vision-based systems are great at determining shapes and colors, are essential for reading traffic signals, and allow rapid categorization of objects in the field of view, such as pedestrians and cyclists. But vision systems don't work well at night in the dark, and they are not good at precisely tracking the distances and velocities of moving objects.

This is the strength of the new higher-resolution LiDARs: they can literally provide a 3D view of the world as the infrared pulses bounce off surrounding objects and return to the sensor with precise distance information. Because the pulses are so closely spaced, the speed at which objects are moving relative to the vehicle is easily calculated.
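The underlying arithmetic is simple time-of-flight ranging. A minimal sketch, with the pulse timings invented for illustration:

    C = 299_792_458.0  # speed of light, m/s

    def tof_range(round_trip_seconds):
        """A pulse travels out and back, so one-way range is c * t / 2."""
        return C * round_trip_seconds / 2

    r1 = tof_range(1.000e-6)            # ~149.9 m on this frame
    r2 = tof_range(0.999e-6)            # ~149.7 m on the next frame, 25 ms later
    closing_speed = (r1 - r2) / 0.025   # ~6 m/s toward the sensor
    print(round(r1, 1), round(closing_speed, 1))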


The $250 price point is a really big deal for the automated vehicle ecosystem, as the cost of the LiDAR technology specifically has been a major stumbling block to the deployment of driverless cars.


For comparison, the high-end Velodyne unit that has been used by Google, Uber, Toyota, Bosch and others was often quoted at between $75,000 and $85,000 only six years ago. More recently, LiDAR costs have tumbled to around $8,000 for a less capable model, the kind found on many initial prototype vehicles.


Prices may have been inflated while Velodyne faced little competition for high-end automotive-grade LiDAR systems, and the imminent arrival of a $250 unit from Quanergy should drastically shake up the market. Suddenly, affordable driverless ride-sharing fleets for public use look to be a far more realistic possibility.


'Production Lines Have Been Built'

"The new S3 solid state LiDAR meets all the  and we are already ramping up to mass produce them," Louay Eldada, founder of Quanergy, told Driverless in an exclusive interview. "Production lines have been built."


With volume production in the region of 100,000 units, the cost is expected to be $250 or less per sensor. For that, the customer will receive a solid-state LiDAR contained in a 9 x 6 x 6 centimeter housing that can detect objects as far as 150 meters away (at 80% reflectivity) and as close as 10 centimeters. The 120-degree field of view in both the horizontal and vertical planes means that multiple S3 units would be needed around the vehicle to provide the necessary sensor coverage for the safest possible driving. And if you're worried about the sensors being vulnerable to hacking, Eldada says they've engineered the sensor with seven layers of protection to make sure the system can't be tricked.
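The coverage arithmetic behind that "multiple units" point is straightforward; the 30 degrees of overlap below is an assumed figure, not a Quanergy recommendation:

    import math

    fov_deg = 120
    print(math.ceil(360 / fov_deg))         # 3 units for bare 360-degree coverage
    print(math.ceil(360 / (fov_deg - 30)))  # 4 units if each pair overlaps by 30 degrees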


Importantly, because the S3 is a solid-state LiDAR, it isn't vulnerable to facing into the sun as other types of LiDAR are. Eldada explained that they have done testing, and unlike LiDARs with physical mirrors, which are always gathering light and therefore prone to being blinded, the solid-state optical phased array is like an electronic lens that is turned on and gathering light waves only precisely when it is needed.


Justifying 'Unicorn Status'

Back in August of last year, Quanergy joined the exclusive "unicorn club" with a $90 million funding round, valuing the startup in excess of a billion dollars. Eldada explained that this funding deal was done in June 2016, and that he doesn't currently foresee the need for any further funding rounds, although an IPO (an initial public offering, when a company raises funds by offering publicly traded shares) shouldn't be ruled out.


It is possible that the Series B round didn't make waves because the company promised much at CES 2016 but did not subsequently demonstrate its solid-state technology publicly in a way that satisfied the skeptics. This latest news appears to build a more solid foundation for the future, justifying the faith of the early Quanergy investors in what appears to be a game-changing technology for affordable self-driving vehicles.


This is a very big deal for far more than just the automotive sector. LiDAR is used in many different industries; Eldada expects initial sales will likely go to mining, agriculture, and security companies. But mobility, in the form of driverless vehicles, is where the biggest bucks will eventually be made.


Growth from all areas where automated systems can provide safety and efficiency benefits will likely lead to a multibillion-dollar market. Quanergy expects the LiDAR market to exceed $1 billion by 2020 and $3 billion by 2022.


Driverless Breakthroughs Expected in 2018

Even though the new-vehicle cycle time for major automakers (the time to design, prototype, and manufacture new models) has been falling steadily over the last decade, from around five years to closer to three, Eldada will not be surprised if it takes only another year before a major increase in demand for LiDAR from the automotive sector.

Eldada thinks we are likely to see some form of fully automated vehicles operating on public roads, even if constrained by speed or geofencing, in 2018 (not 2017).

Some automated-vehicle developers are trying to achieve 2017, but 2018 is more likely to be the breakthrough year: even with the shorter cycle times, it takes time to manufacture vehicles and get them through the development, production and sales cycle. The newer, faster-moving, more aggressive automakers are putting pressure on the major automakers, but they do not have the same volumes. Quanergy will be active in this space and ready to support any automaker, and there will be announcements later this year.

If we do see some fully automated vehicles commercially deployed on public roads in the next couple of years, then there's a reasonable chance that Quanergy's affordable LiDAR technology will have played an important role in making it possible.

Dec 20

Magna teams with Innoviz LiDAR for Autonomous driving system

 

 


By Darrell Etherington (@etherington)

Global automotive supply leader Magna International is teaming up with LiDAR-maker Innoviz to help fill out its sensor-fusion picture for self-driving vehicles, the companies announced Tuesday. The addition of Innoviz's LiDAR sensor means Magna's suite now includes camera, ultrasonic, RADAR and LiDAR sensing capabilities, which essentially covers the range of available sensing techniques used in creating a three-dimensional, highly accurate and live image of the world surrounding an autonomous vehicle.

Innoviz will be showing off prototypes of its upcoming LiDAR hardware, which it plans to bring to production, at CES this year. The key to wider use of LiDAR in autonomous systems is reducing size and cost, and that's definitely true for Magna, which will be attempting to sell its self-driving systems as a Tier 1 supplier to automakers, providing them with everything they need to field production self-driving cars once the technology gets to where it needs to be and regulation catches up to the pace of industry progress.

Innoviz is pursuing solid-state LiDAR technology, which helps reduce the cost and size of components by minimizing the number of moving parts required in a LiDAR sensor's construction. The InnovizOne it's demoing at CES has a 200-meter detection range, yet costs just $100 per unit, with a 5 x 5 x 5 centimeter footprint and improved durability versus previous LiDAR designs.

 

Magna, which is a key supplier to essentially every major automaker in operation right now, has been investing heavily in autonomous driving technology, but has previously cautioned that it still believes true self-driving technology won't be widely available to consumers for a long while yet, with cost accounting for part of that.

Dec 13

Google’s self-driving car project is now a new company called Waymo

 

 


By Katie Burke

The new company, led by CEO John Krafcik, exists under the Google parent company Alphabet and will operate like a “venture-backed startup,” Krafcik said at a press event today. Waymo will be based in Mountain View, Calif., and will be responsible for developing self-driving technology while exploring opportunities in trucking, logistics and automaker partnerships.

“We are a self-driving technology company with a mission to make it safe and easy for people to move around,” Krafcik said, emphasizing that Waymo is not a car company.

Until now, the program has been part of secretive research unit Google X.
Waymo stands for "A new way forward in mobility," according to Krafcik.

The company’s first driverless car ride on public roads -- without a steering wheel or brake pedal -- happened in Austin, Texas, in October 2015, and 10,000 similar tests have since taken place.

"Waymo’s next step will be to let people use our vehicles to do everyday things like run errands, commute to work, or get safely home after a night on the town," the company said in a statement.

Krafcik added that the company is in “build phase” in its partnership with Fiat Chrysler Automobiles to develop 100 self-driving Chrysler Pacifica minivans, outfitting the vehicles with updated sensor systems.

Waymo’s autonomous system uses radar, camera and lidar sensors, and the company is developing primarily Level 4 and Level 5 technology. Nathaniel Fairfield, Waymo’s principal software engineer, said the sensors have been able to handle rough weather conditions.

The spinoff shows Alphabet believes there is a market for these cars, but it’s still uncertain whether consumers want self-driving technology.

“The question remains whether consumers are ready for this, since most prefer at least an option to take over the driving,” said Rebecca Lindland, senior analyst for Kelley Blue Book.

More engineers

Google has expanded its program over the past year, hiring more engineers while doubling its testing centers from two U.S. cities to four.

Although there have been some significant departures over the past year -- Chief Technical Officer Chris Urmson left in August after leading the project from its inception -- some new hires have pointed to the program's readiness to move past its experimental stage.

In July, the project appointed its first general counsel and a month later it hired former Airbnb executive Shaun Stewart as director of the project, with a mandate to commercialize the company's self-driving technology.

Krafcik, 55, the former Hyundai Motor America CEO and longtime Ford executive, joined Google in September 2015.

Reuters also contributed to this report.


Dec 13

GM to mass produce autonomous Chevrolet Bolts in January

 

 


General Motors today said it will be the first company to mass-produce autonomous vehicles, with test versions of Chevrolet Bolts equipped with self-driving technology planned to start rolling out of a Michigan plant in January.

GM CEO Mary Barra also said the company will start testing autonomous Bolts on public roads near the GM Technical Center in suburban Detroit “immediately,” in response to state legislation enacted last week that allows wider use of the vehicles. She said metropolitan Detroit will be GM’s primary test area for snow and cold-weather driving.

GM already has been testing about 40 self-driving Bolts on roads in San Francisco and Scottsdale, Ariz. The fleet used in Michigan will be “significantly larger” than that, said Doug Parks, GM’s vice president of autonomous technology and vehicle execution.

“We’re ensuring that our AVs can operate safely across a full range of road, weather and climate conditions,” Barra said at a news conference this afternoon.

Barra said workers at GM’s plant in Orion Township, Mich., will build Bolts with cameras, sensors, lidar and other technology needed for autonomous operation. The plant already makes the regular Bolt, an electric car that can go an estimated 238 miles on a full battery, and the Chevy Sonic subcompact car.

GM’s autonomous testing in Michigan has thus far been limited to private roads within the 1 square-mile campus of its Technical Center. Barra said that will now expand to roads in the nearby area, followed later by testing throughout the Detroit region.

Michigan Gov. Rick Snyder last week signed legislation that encourages testing and deployment of autonomous vehicles across the state. The package of bills allows automakers and ride-hailing or ride-sharing companies to operate vehicles without a steering wheel or other controls for a human driver.

Barra declined to say when GM expects to start offering autonomous vehicles publicly. GM made a $500 million investment in the ride-hailing company Lyft last January and has said it plans to eventually add autonomous Bolts to Lyft’s fleet within several years.

GM began selling the standard Bolt EV in California and Oregon this week. It plans to expand availability to New York, Massachusetts and Virginia in early 2017 and nationwide around the middle of the year.

GM is one of many companies racing to make autonomous vehicles ready for public use, a process that involves testing for countless hypothetical scenarios to ensure safety. The companies see the technology as a way to potentially save tens of thousands of lives each year, because most fatal crashes are the result of human error.


Twitter: http://www.twitter.com/nickbunkley

Dec 12

Autonomous Infrastructure by Overland ATS

 

 


By Waldemar F. Kissel, Jr., founder and president of Overland ATS, LLC

There are two varieties of autonomous vehicle transportation. The first we are all familiar with: cars with autonomous capability driving on conventional roads. This is the image people generally conjure in their minds upon hearing the phrase “autonomous cars”. There is, however, a second type of autonomous transportation, and that is autonomous infrastructure. This technology allows either traditional or autonomous cars to travel autonomously while on the infrastructure, creating a safer system for all.


We still need mainstream automotive infrastructure (what we commonly refer to as highways). Transportation consumers, as well as federal, state and urban transportation agencies, want a replacement for our existing technique for building the local transportation infrastructure we travel on every day; the current method of building roads was pioneered by the Persians and Mesopotamians. This is the infrastructure we use for commuting, shopping, going to school, traveling between urban areas, and taking road trips. It is the part of our urban transportation system we rely on for everyday use, and it leaves us stressed over traffic congestion, gridlock, and insufficient road capacity. The autonomous system described here is accessible for that same everyday use.

Let’s face it--high speed rail, commuter trains, people movers, and mass transit are not convenient enough to relieve the problem. Each of these may serve its own special purpose, but that is not the source of consumer traffic concerns or of their dissatisfaction with existing roadways.

Urban residents do not want neighborhoods separated by multi-lane expressways. Consumers do not want more roadways that quickly become gridlocked. Large swaths of asphalt throughout dense urban areas create hot spots and contribute to global warming. Consumers want infrastructure with much greater capacity, a lower profile footprint, low cost and one that does not get gridlocked.

The Federal Highway Administration, US DOT, Texas DOT, Texas A&M University, and the Texas Transportation Institute collaborated to prepare a set of guidelines for the ideal transportation solution. This was a masterpiece, listing 28 fundamental characteristics of an ideal autonomous solution. However, they did not reduce the list to a physical embodiment; there are no drawings or descriptions of a system. They examined 200 concepts from all over the world and rejected all of them. One concept did not get evaluated because it was not ready: the Overland Automated Transportation System.


The referenced study specified an arterial network for each urban area and infrastructure on both sides of all interstate highways. Overland ATS would accommodate these requirements with a total of 375,000 miles of infrastructure, which would carry over 70 percent of all vehicle miles traveled at a velocity of 150 mph. The vehicles, conventional autonomous vehicles, would be owned by private individuals, private entities, government entities and fleet owners. If Overland costs $5 million per mile, the investment would be $1.875 trillion. The infrastructure is passive, so it incurs expense only when there is traffic, and that expense is then charged to the traffic.
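The investment figure is straightforward arithmetic on the article's own assumptions:

    miles = 375_000
    cost_per_mile = 5_000_000            # dollars per mile, as assumed above
    print(miles * cost_per_mile / 1e12)  # 1.875, i.e. $1.875 trillion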

This infrastructure is another dimension of autonomous transportation. The vehicles, whether manual or autonomous, are controlled by the infrastructure. All any vehicle really needs to do is maintain its speed at 150 MPH. The infrastructure has no moving parts. There are computers, sensors, algorithms, wireless connectivity, but these are not moving parts.

The infrastructure, known as Overland ATS, does all the rest. The Overland system is dual-mode, elevated, electrified, immune to hackers, immune to all kinds of weather and atmospheric conditions, and protected against flash flooding, hurricanes, tornadoes, hijacking (of freight) and kidnapping. This infrastructure can facilitate the transition to autonomous transportation. It is far more advantageous for electric autonomous vehicles to travel on the dual-mode infrastructure when traveling distances greater than a couple of miles.

All vehicles are insured in full by the infrastructure while on the infrastructure.

Dec 9

Artificial Intelligence - the Vatican's view




Article by Elise Harris

A new conference at the Vatican drew experts in various fields of science and technology for a two-day dialogue on the “Power and Limits of Artificial Intelligence,” hosted by the Pontifical Academy of Sciences. The high-level discussion gathered experts to consider the progress, benefits and limits of advances in artificial intelligence.

Among the scheduled speakers were several prestigious scientists, including Stephen Hawking, the prominent British professor at the University of Cambridge and a self-proclaimed atheist, as well as a number of major tech heads such as Demis Hassabis, CEO of Google DeepMind, and Yann LeCun of Facebook.

The event, which ran from Nov. 30-Dec. 1, was hosted at the Vatican's Casina Pio IV, the headquarters of the Pontifical Academy of Sciences, which is headed by its chancellor, Bishop Marcelo Sanchez Sorondo.

Werner Arber, a Protestant and president of the academy who works in the field of evolutionary biology, said that while artificial intelligence isn't his specific area, it's important for the Vatican entity to have a voice in the discussion, since their task is “to follow all actual developments in the field of natural sciences” in order to stimulate further research.

As far as the discussion on artificial intelligence is concerned, Arber said it's important to understand current developments, which include increasing dialogue as to whether research done on natural sciences can then be applied to the field of machinery and robotics.

Part of the debate, he said, has been whether or not machines could eventually take on some of the work human beings have traditionally done. However, he cautioned that there would be some “social-scientific implications,” since this could eventually lead to less work for people.

This is “an ethical aspect, do we want that or not?” Arber said, noting that human beings have a unique thinking and problem-solving capacity, and “it’s not good” if this gets pushed too far to the side.

It's a “very important task of our human life...so we have to be careful to preserve our duties,” he said.

Also present at the meeting was Demis Hassabis, CEO of British artificial intelligence company DeepMind, founded in 2010 and acquired by Google in 2014. He spoke on the first day of the conference about the possibility of moving forward “Towards Artificial General Intelligence.”

Part of Hassabis' work involves the science of “making machines smarter,” and trying to build learning systems that allow computer systems to learn directly from data and experience in order to eventually figure out tasks on their own.

In comments to CNA, he noted how he has established an ethics board at the company to ensure that things don’t get out of hand while research is moving forward.

Artificial intelligence “is a very powerful technology,” he said, explaining that while he believes technologies in and of themselves are neutral, “it depends on what you end up using that technology for.”

“So I think as a society we need to think very carefully about the ethical use of technologies, and as one of the developers of this kind of artificial intelligence technology we want to be at the forefront of thinking how to use it responsibly for the good of everyone in the world,” he said.

One of the ways his company's work is currently affecting Google is through little things such as how to organize photos and recognize what's in them, as well as the way a person's phone speaks to them and the optimization of the energy that Google's data centers use.

Hassabis said he thinks it’s “really interesting” to see the wider Catholic community taking an interest in the discussion, and called the Church’s involvement a great way “to start talking about and debating” how artificial intelligence “will affect society and how we can best use it to benefit all of the society.”

Stanislas Dehaene, a professor of cognitive neuroscience at the College de France and a member of the Pontifical Academy of Sciences, was also present at the gathering, and spoke to participants on day two about “What is consciousness, and could machines have it?”

Dehaene told CNA/EWTN News that “enormous progress” has been made in understanding the brain, and thanks in part to these advancements, great steps have also been taken in modeling neural networks, eventually leading “to superb artificial intelligence systems.”

With a lot of research currently being done on consciousness, Dehaene said a true “science of consciousness” has developed: what happens in the brain when it becomes aware of a piece of information is now understood “to such a point that it can be modeled.”

“So the question is could it be put in computers?” he said, explaining that this is currently being studied. He said he personally doesn’t know yet whether there is a limit to the possibilities for artificial intelligence, or what it would be.

However, he stressed that “it's very important” to consider how further advances in artificial intelligence “will modify society, how far can it go and what are the consequences for all of us, for our jobs in particular.”

Part of the discussion that needs to take place, Dehaene said, is “how to put ethical controls in the machines so they respect the laws and they respect even the moral laws” that guide human decisions.

“That is an extremely important goal that has not been achieved yet,” he said, adding that while he personally doesn’t have a problem with a machine making ethical judgments similar to that of a human being, the question “is how to get there” and how to make sure “we don't create a system that is full of machines that don’t look like humans, that don’t share our intuitions of what should be a better world.”

Another major tech head present for the conference was Professor Yann LeCun, Director of Artificial Intelligence Research at Facebook.

What they try to do at Facebook is to “push the state of the arts to make machines more intelligent,” LeCun told CNA. The reason for this, he said, is that people are increasingly interacting through machines.

Artificial intelligence “would be a crucial key technology to facilitate communication between people,” he said, since the company’s main focus “is connecting people and we think that artificial intelligence has a big role to play there.”

Giving an example, LeCun noted that Facebook users upload around 1 billion photos every day, each of which is recognized; artificial intelligence systems then monitor the content of the photos in order to show users more images they might be interested in, or to filter out those they might object to.

“It also enables the visually impaired to get a textual description of the image that they can't see,” he said, “so that is very useful.”

In terms of how this technology might transform the way we live, LeCun said that within the next few years or even decades, “there will be transformative applications” of artificial intelligence visible and accessible to everyone.

Self-driving cars, the ability to call a car from your smartphone instead of owning one, no parking lots and safer transportation are all things LeCun said he can see on the horizon, with medical advances being another area of rapid growth.

“There are already prototype systems that have been demonstrated to be better than human radiologists at picking out cancerous tumors,” he said, explaining that this alongside a “host of other applications” are going to make “a big difference.”

When it comes to the ethics of the discussion, LeCun noted that there are both short-term and long-term concerns, such as “are robots gonna take over the world?”

“Frankly these are questions that we are not worried about right now because we just don't have the technology that's anywhere near the kind of power that's required. So these are philosophical discussions but not immediate problems,” he said.

However, short-term debate points include how to make the artificial intelligence systems that already exist safer and more reliable.

LeCun noted that he has helped set up a discussion forum called the Partnership on AI, co-founded by Facebook, Google, Microsoft, Amazon and IBM, in order to facilitate discussion on the best ways to deploy artificial intelligence.

Both ethical and technical questions are brought up, he said, noting that since it's a public forum, people from different fields, including academia, government, the social sciences and ethics, are able to participate and offer their contributions.

Dec 4

Tesla begins its new Autopilot rollout by the end of 2016

 

Tesla Motors Autopilot to roll out earlier than expected


Following his previous announcement that Tesla's autonomous Autopilot vehicles would be available by the end of 2017, CEO Elon Musk let the cat out of the bag in a tweet that mentioned an earlier date.

Musk appeared to be responding to a Twitter question regarding the Autopilot rollout, confirming the release will begin in “about three weeks.”

All Tesla cars released after October 19th include new built-in hardware expected to improve Autopilot's features.

The cars were later stripped of many first-generation Autopilot features following issues.

Tesla said that radar will now be used as a primary control sensor without requiring confirmation from the camera. This will involve fleet learning, which monitors the actions of other vehicles in response to a hazard and uses this information to determine what the Tesla should do.
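As an illustration only (this is not Tesla's software, and every name and threshold below is an assumption), radar-as-primary plus fleet learning could be sketched like this:

    def should_brake(radar_confidence, camera_confirms, location_id, fleet_whitelist):
        """Radar alone may trigger braking; fleet data suppresses known false alarms."""
        if location_id in fleet_whitelist:   # e.g. a spot other cars have already
            return False                     # reported as a harmless radar return
        if radar_confidence >= 0.9:          # high-confidence radar acts alone
            return True
        return radar_confidence >= 0.6 and camera_confirms

    print(should_brake(0.95, False, "overpass_41", {"overpass_17"}))  # True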

Elon Musk responding on Twitter to a question about Autopilot, confirming an earlier release date.

This follows a recent tweet from Musk confirming that the Autopilot features would be added back in mid-December, and that Autopilot 8.1 features will then be rolled out consistently on a monthly basis.

These features include adaptive cruise control and autonomous emergency braking, both of which were initially disabled.

According to CNET, the first stage of this rollout will likely target Tesla's new Model X and Model S vehicles rather than its older vehicles currently on the road.

Using cameras and radar, the new models will be able to match the speed of the car in front, stay in lane, switch lanes and perform basic self-parking.


Not all of the new ‘Enhanced Autopilot’ features are expected to be available next month, but with monthly rollouts announced, the majority should be ready and running at full speed earlier than initially announced.

Dec 3

3 groups to test Driverless cars in Ontario early 2017

 

 

Driverless cars to hit the road in Ontario early next year


Three groups have been approved to test self-driving cars on public roads in Ontario.

By Michael Lewis, Business Reporter

 

Driverless cars are headed for Ontario’s public roads as part of a pilot project the province says puts it in the pole position in Canada’s development of autonomous vehicle technology.

After motoring up a test track to a Waterloo press briefing Monday in a self-driving Lincoln MKZ hybrid sedan nicknamed the Autonomoose, Transportation Minister Steven Del Duca said three groups have been approved to begin road testing.

The University of Waterloo's Centre for Automotive Research will test the MKZ, likely starting in the first three months of next year, while Erwin Hymer Group, the Kitchener-based manufacturer of Roadtrek motor homes, also received approval.

The third approval was granted to QNX, a division of Waterloo’s BlackBerry, which will develop vehicle software in association with its test of automated features of a 2017 Lincoln.

The researchers are evaluating how the vehicles operate under various weather and road conditions, speeds and degrees of automation.

The goal of the Waterloo research team is to progressively add more automated features, while specific aims include improving automated driving in challenging Canadian weather conditions, further optimizing fuel efficiency to reduce emissions, and designing new computer-based controls.

The researchers will test vehicles everywhere from city streets to divided highways, with a licensed and insured driver required to be in the driver’s seat to take over if needed. Vehicles also include a prominent “off” button that disables autonomous systems.

“The ability to take this research work to the next level while safely testing on all kinds of roads in Ontario represents a significant leap forward in this field,” said Krzysztof Czarnecki, a lead researcher and professor in Waterloo’s Department of Electrical and Computer Engineering and the Cheriton School of Computer Science.

Just over a year ago Ontario became the first province in Canada to approve testing of automated vehicles and related technologies on its roads, announcing a 10-year project that it said provided an opportunity “to be a world leader in automated technology.”

Del Duca said Monday that testing could be conducted anywhere in Ontario, although discussions are taking place with municipalities including Stratford, which had been rumoured as a test site, along with a number of other industry partners.

“I think we are going to have additional updates in the weeks and months ahead,” said Del Duca.

He suggested that more participants could be announced and said an ecosystem needs to be developed that aligns Wi-Fi and other infrastructure with communications systems in vehicles.

At the time of the pilot project’s unveiling last October, officials said nearly 100 companies and institutions were involved in R&D for the connected vehicle and automated vehicle industry in the province, supporting opportunities to ultimately bring automated vehicles to market.

Del Duca said Ontario wants to lead in self-driving development without compromise to road safety, adding that more work is being done to strengthen safety assurances.

Self-driving vehicles are capable of detecting the surrounding environment using artificial intelligence, sensors and global positioning system coordinates.

Ontario says the technologies have the potential to help improve fuel efficiency, as well as reduce traffic congestion, greenhouse gas emissions and driver distraction.

Home to five major global automotive assemblers as well as truck manufacturer Hino, the province says it supports transportation innovation as part of its plan to fuel economic growth and create jobs. It has provided $2.95 million in funding through the Ontario Centres of Excellence Connected Vehicle/Automated Vehicle Program.