Howdy!
It's me, Scarlett!
This week we have 4 topics.
◈ Love : These are the questions one writer says can make you fall in love with a stranger
◈ Healthcare : 7 Things That Will Happen When You Start Doing Planks Every Day
◈ Psychology : Try something new for 30 days
◈ Tech : Can We Create an Ethical Robot?
Hope you enjoy the topics.
With luv
Scarlett
These are the questions one writer says
can make you fall in love with a stranger
Erin Brodwin / Jan. 13, 2015, 11:23 AM
What if love weren't as passive as we tend to picture it being?
What if, instead of stumbling into it as a result of chance or fate, we actively chose it?
In 1997, State University of New York psychologist Arthur Aron tested the idea that two people who were willing to feel more connected to each other could do so, even within a short time. The experiment is featured prominently in a recent Modern Love column in The New York Times, in which the author pointed to the questions as the springboard into her own romance; more on that here.
For his study, Aron separated two groups of people, then paired people up within their groups and had them chat with one another for 45 minutes. While the first group of pairs spent the 45 minutes engaging in small talk, the second group got a list of questions that gradually grew more intimate.
Not surprisingly, the pairs who asked the gradually more probing questions felt closer and more connected after the 45 minutes were up. Six months later, two of the participants (a tiny fraction of the original study group) even found themselves in love — an intriguing result, though not a significant one.
Here are the 36 questions the pairs in Aron's test group asked one another, broken up into three sets. Each set is intended to be more intimate than the one that came before.
Question Set 1
1. Given the choice of anyone in the world, whom would you want as a dinner guest?
2. Would you like to be famous? In what way?
3. Before making a telephone call, do you ever rehearse what you are going to say? Why?
4. What would constitute a "perfect" day for you?
5. When did you last sing to yourself? To someone else?
6. If you were able to live to the age of 90 and retain either the mind or body of a 30-year-old for the last 60 years of your life, which would you want?
7. Do you have a secret hunch about how you will die?
8. Name three things you and your partner appear to have in common.
9. For what in your life do you feel most grateful?
10. If you could change anything about the way you were raised, what would it be?
11. Take four minutes and tell your partner your life story in as much detail as possible.
12. If you could wake up tomorrow having gained any one quality or ability, what would it be?
Question Set 2
13. If a crystal ball could tell you the truth about yourself, your life, the future or anything else, what would you want to know?
14. Is there something that you’ve dreamed of doing for a long time? Why haven’t you done it?
15. What is the greatest accomplishment of your life?
16. What do you value most in a friendship?
17. What is your most treasured memory?
18. What is your most terrible memory?
19. If you knew that in one year you would die suddenly, would you change anything about the way you are now living? Why?
20. What does friendship mean to you?
21. What roles do love and affection play in your life?
22. Alternate sharing something you consider a positive characteristic of your partner. Share a total of five items.
23. How close and warm is your family? Do you feel your childhood was happier than most other people’s?
24. How do you feel about your relationship with your mother?
Question Set 3
25. Make three true "we" statements each. For instance, "We are both in this room feeling ______."
26. Complete this sentence: “I wish I had someone with whom I could share _______.”
27. If you were going to become a close friend with your partner, please share what would be important for him or her to know.
28. Tell your partner what you like about them; be very honest this time, saying things that you might not say to someone you’ve just met.
29. Share with your partner an embarrassing moment in your life.
30. When did you last cry in front of another person? By yourself?
31. Tell your partner something that you like about them already.
32. What, if anything, is too serious to be joked about?
33. If you were to die this evening with no opportunity to communicate with anyone, what would you most regret not having told someone? Why haven’t you told them yet?
34. Your house, containing everything you own, catches fire. After saving your loved ones and pets, you have time to safely make a final dash to save any one item. What would it be? Why?
35. Of all the people in your family, whose death would you find most disturbing? Why?
36. Share a personal problem and ask your partner’s advice on how he or she might handle it. Also, ask your partner to reflect back to you how you seem to be feeling about the problem you have chosen.
Try them out, and let us know what happens.
Article source : http://www.businessinsider.com/questions-psychologist-says-can-make-you-fall-in-love-2015-1?utm_content=buffer0f6cc&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer
<Questions>
Q1. What is your definition of LOVE?
Q2. Why do you love someone?
Q3. Could you describe how you change when you are falling in love?
Q4. Have you ever loved someone you should not?
Q5. How do you handle your relationship when your passion for your partner fades away?
Q6. What is your favorite movie or song about love?
Q7. Do you think you can fall in love with a stranger?
Q8. Could you describe your ideal type? Which celebrity comes closest to your ideal type, and why?
Q9. Please try the 36 questions above with your table members, and let us know how it goes. Do you feel close enough to your table members to fall in love with them?
7 Things That Will Happen
When You Start Doing Planks Every Day
FITNESS, LIFESTYLE / BY SZABO LASZLO
Set a goal for yourself
"Without good health, it's hard to enjoy life.
So I'm going to keep myself and my loved ones always healthy."
Bodyweight exercises are gaining ground in the fitness world due to the practicality and simplicity of getting in shape using your own body weight. Planks are one form of bodyweight exercises that will never go out of fashion. Planks are one of the most effective exercises you can do. Why? Because they require a small time investment on your part, and offer the chance to achieve substantial results in a relatively short span of time.
Abdominal muscles must provide support for our entire back and spinal column. In doing so, they also play a vital role in preventing injuries. However, for them to perform this function successfully, our core muscles have to be strong and trained on a regular basis. What all this means is that doing plank exercises every day is a great way to strengthen your core, and in doing so, support your spine.
Now, let’s focus on what will happen when you start doing planks every day:
1. You’ll improve core definition and performance:
Planks are an ideal exercise for the abdominal muscles exactly because they engage all major core muscle groups, including the transverse abdominis, the rectus abdominis, the external oblique muscles, and the glutes. The importance of strengthening each muscle group cannot be overstated either, for all of these groups serve their own purpose. If you strengthen these muscle groups you will notice:
Transverse abdominis: increased ability to lift heavier weights.
Rectus abdominis: improved sports performance, particularly with jumping. This muscle group is also responsible for giving you the renowned six-pack look.
Oblique muscles: improved capacity for stable side-bending and waist-twisting.
Glutes: a supported back and a strong, shapely booty.
2. You’ll decrease your risk of injury in the back and spinal column
Doing planks is a type of exercise that allows you to build muscle while also making sure that you are not putting too much pressure on your spine or hips. According to the American Council on Exercise, doing planks regularly not only significantly reduces back pain but also strengthens your muscles and ensures strong support for your entire back, especially in the areas around your upper back.
Check out this article if you would like to find out about how doing planks on different surfaces can impact the effectiveness of this exercise in strengthening your core.
3. You’ll boost your overall metabolism
Planking is an excellent way of challenging your entire body, because doing planks every day will burn more calories than traditional abdominal exercises, such as crunches or sit-ups. The muscles you strengthen by doing this exercise on a day-to-day basis will ensure that you burn more energy even when sedentary. This is especially important if you spend the majority of your day sitting in front of a computer. Also, making it a daily 5- to 10-minute home exercise before or after work will not only raise your metabolic rate but also ensure that it remains high all day long (yes, even while you are asleep).
4. You’ll significantly improve your posture
Doing planks greatly improves your ability to stand with straight and stable posture. Through strengthening your core you will be able to maintain proper posture at all times because muscles in the abdomen have a profound effect on the overall condition of your neck, shoulders, chest and back.
5. You’ll improve overall balance
Have you ever felt that when you tried standing on one leg, you couldn’t stand up straight for more than a couple of seconds? It’s not because you were drunk (unless you happened to be at the time!), but rather because your abdominal muscles weren’t strong enough to give you the balance you needed. By doing side planks and planks with extensions to improve your balance, you will boost your performance in every kind of sporting activity.
6. You’ll become more flexible than ever before
Flexibility is a key benefit of doing planks regularly, for this form of exercise expands and stretches all your posterior muscle groups – shoulders, shoulder blades, and collarbone – while also stretching your hamstrings, arches of your feet, and toes. With a side plank added in to the mix, you can also work on your oblique muscles. This will provide you with further benefits when it comes to hyper-extending your toes, a movement that is crucial for supporting your body’s weight.
7. You’ll witness mental benefits
Plank exercises have a particular effect on our nerves, making them an excellent means of improving overall mood. How? Well, they stretch out muscle groups that contribute to stress and tension in the body. Just think about it: you are sitting in your chair, at home or at work, all day long; your thigh muscles get tight, your legs get heavy from being bent for several hours, and tension builds in your shoulders from being forced to slump forward all day. These are all circumstances that put too much stress on the muscles and nerves. The good news is that planks not only calm your brain, but they can also treat anxiety and symptoms of depression, though only if you make them part of your daily routine.
Now, the last thing left to do is to give you a sample plank exercise you can do to achieve great results in only 5-10 minutes a day.
Here is a great infographic that shows some of the best plank exercises to evenly target all abdominal muscle groups:
Are you ready to devote 5-10 minutes of your day, every day, to stay fit, healthy and, most importantly, strong as a bull? Then jump in and make doing plank exercises a part of your life.
Article source : http://www.lifehack.org/292578/7-things-that-will-happen-when-you-start-doing-planks-every-day
<Questions>
Q1. Do you exercise on a regular basis? How often do you exercise, and what kind of exercise is it?
Q2. Which do you prefer: group exercise or solo exercise?
Q3. When you exercise, do you prefer to be indoors or outdoors?
Q4. Do you know how to do planks? Please explain it to us.
Q5. Would you like to meet someone who has a similar hobby? What is your hobby?
Q6. To live a successful life, what activities do you keep up?
Try something new for 30 days
A few years ago, I felt like I was stuck in a rut, so I decided to follow in the footsteps of the great American philosopher, Morgan Spurlock, and try something new for 30 days. The idea is actually pretty simple. Think about something you've always wanted to add to your life and try it for the next 30 days. It turns out 30 days is just about the right amount of time to add a new habit or subtract a habit -- like watching the news -- from your life.
There's a few things I learned while doing these 30-day challenges. The first was, instead of the months flying by, forgotten, the time was much more memorable. This was part of a challenge I did to take a picture every day for a month. And I remember exactly where I was and what I was doing that day. I also noticed that as I started to do more and harder 30-day challenges, my self-confidence grew. I went from desk-dwelling computer nerd to the kind of guy who bikes to work. For fun!
Even last year, I ended up hiking up Mt. Kilimanjaro, the highest mountain in Africa. I would never have been that adventurous before I started my 30-day challenges.
I also figured out that if you really want something badly enough, you can do anything for 30 days. Have you ever wanted to write a novel? Every November, tens of thousands of people try to write their own 50,000-word novel, from scratch, in 30 days. It turns out, all you have to do is write 1,667 words a day for a month. So I did. By the way, the secret is not to go to sleep until you've written your words for the day. You might be sleep-deprived, but you'll finish your novel. Now is my book the next great American novel? No. I wrote it in a month. It's awful.
But for the rest of my life, if I meet John Hodgman at a TED party, I don't have to say, "I'm a computer scientist." No, no, if I want to, I can say, "I'm a novelist."
So here's one last thing I'd like to mention. I learned that when I made small, sustainable changes, things I could keep doing, they were more likely to stick. There's nothing wrong with big, crazy challenges. In fact, they're a ton of fun. But they're less likely to stick. When I gave up sugar for 30 days, day 31 looked like this.
So here's my question to you: What are you waiting for? I guarantee you the next 30 days are going to pass whether you like it or not, so why not think about something you have always wanted to try and give it a shot for the next 30 days?
Thanks.
Article source : http://www.ted.com/talks/matt_cutts_try_something_new_for_30_days/transcript?language=en
<Questions>
Q1. Could you recommend some activities which are motivational or intriguing?
Q2. Have you ever felt that you were stuck in a rut? How did you overcome it?
Q3. What are the main reasons for people to be in a state of lethargy in their lives?
Q4. Do you have a bucket list? Please name three items on it.
Q5. What do you think about trying something new for 30 days? Do you think it could bring some vividness back to your life?
Q6. When you hit a slump, how do you get over it? Do you have any secret ways?
Can We Create an Ethical Robot?
Without our social sense, an android will buy that last muffin,
and a driverless car might run over a child
By JERRY KAPLAN / July 24, 2015 1:21 p.m. ET
As you try to imagine yourself cruising along in the self-driving car of the future, you may think first of the technical challenges: how an automated vehicle could deal with construction, bad weather or a deer in the headlights. But the more difficult challenges may have to do with ethics. Should your car swerve to save the life of the child who just chased his ball into the street at the risk of killing the elderly couple driving the other way? Should this calculus be different when it’s your own life that’s at risk or the lives of your loved ones?
Recent advances in artificial intelligence are enabling the creation of systems capable of independently pursuing goals in complex, real-world settings—often among and around people. Self-driving cars are merely the vanguard of an approaching fleet of equally autonomous devices. As these systems increasingly invade human domains, the need to control what they are permitted to do, and on whose behalf, will become more acute.
How will you feel the first time a driverless car zips ahead of you to take the parking spot you have been patiently waiting for? Or when a robot buys the last dozen muffins at Starbucks while a crowd of hungry patrons looks on? Should your mechanical valet be allowed to stand in line for you, or vote for you?
In the suburb where I live, downtown parking is limited to two hours during the day. The purpose of this rule is to broadly allocate a scarce resource and to promote the customer turnover critical to local businesses. Now imagine that I’m the proud owner of a fancy new autonomous car, capable of finding a spot and parking by itself. You might think that my car should be permitted to do anything that is legal for me to do—but in this case, should I be allowed to instruct it to repark itself every two hours?
Delegating my authority to the car undermines the intent of the law, precisely because it circumvents the cost intentionally imposed on me for the community’s greater good. We can certainly modify the rule to accommodate this new invention, but it is hard to see any general principles that we can apply across the board. We will need to examine each of our rules and adjust them on a case-by-case basis.
Then there is the problem of redesigning our public spaces. Within the next few decades, our stores, streets and sidewalks will likely be crammed with robotic devices fetching and delivering goods of every variety. How do we ensure that they respect the unstated conventions that people unconsciously follow when navigating in crowds?
A debate may erupt over whether we should share our turf with machines or banish them to separate facilities. Will it be “Integrate Our Androids!” or “Ban the Bots!”?
And far more serious issues are on the horizon. Should it be permissible for an autonomous military robot to select its own targets? The current consensus in the international community is that such weapons should be under “meaningful human control” at all times, but even this seemingly sensible constraint is ethically muddled. The expanded use of such robots may reduce military and civilian casualties and avoid collateral damage. So how many people’s lives should be put at risk waiting for a human to review a robot’s time-critical kill decision?
Even if we can codify our principles and beliefs algorithmically, that won’t solve the problem. Simply programming intelligent systems to obey rules isn’t sufficient, because sometimes the right thing to do is to break those rules. Blindly obeying a posted speed limit of 55 miles an hour may be quite dangerous, for instance, if traffic is averaging 75, and you wouldn’t want your self-driving car to strike a pedestrian rather than cross a double-yellow centerline.
People naturally abide by social conventions that may be difficult for machines to perceive, much less follow. Finding the right balance between our personal interests and the needs of others—or society in general—is a finely calibrated human instinct, driven by a sense of fairness, reciprocity and common interest. Today’s engineers, racing to bring these remarkable devices to market, are ill-prepared to design social intelligence into a machine. Their real challenge is to create civilized robots for a human world.
—This essay is adapted from Mr. Kaplan’s new book, “Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence,” which will be published August 4 by Yale University Press
Article source: http://www.wsj.com/articles/can-we-create-an-ethical-robot-1437758519
How to build an ethical robot
Wednesday 16 March 2016
Many people assume that robots would have to be sentient before they could act ethically. But this is not the case, says Alan Winfield, Director of the Science Communication Unit at the University of the West of England.
“The robot behaves ethically not because it chooses to but because it’s programmed to do so,” he says. “We call it an ethical zombie.”
In this video for the World Economic Forum's IdeasLab series, Winfield poses the question: “If we can build even minimally ethical robots, are we morally compelled to do so?”
And with driverless cars just around the corner, it’s a question that we’re going to have to answer quite soon.
Article source : http://www.weforum.org/agenda/2016/03/how-to-build-an-ethical-robot
Robotics: Ethics of artificial intelligence
27 May 2015
Four leading researchers share their concerns and solutions
for reducing societal risks from intelligent machines.
Stuart Russell: Take a stand on AI weapons
Sabine Hauert: Shape the debate, don't shy from it
Russ Altman: Distribute AI benefits fairly
Manuela Veloso: Embrace a robot–human world
Stuart Russell: Take a stand on AI weapons
Professor of computer science, University of California, Berkeley
The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS).
Technologies have reached a point at which the deployment of such systems is — practically if not legally — feasible within years, not decades. The stakes are high: LAWS have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans. LAWS might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.
Existing AI and robotics components can provide physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. They just need to be combined. For example, the technology already demonstrated for self-driving cars, together with the human-like tactical control learned by DeepMind's DQN system, could support urban search-and-destroy missions.
Two US Defense Advanced Research Projects Agency (DARPA) programmes foreshadow planned uses of LAWS: Fast Lightweight Autonomy (FLA) and Collaborative Operations in Denied Environment (CODE). The FLA project will program tiny rotorcraft to manoeuvre unaided at high speed in urban areas and inside buildings. CODE aims to develop teams of autonomous aerial vehicles carrying out “all steps of a strike mission — find, fix, track, target, engage, assess” in situations in which enemy signal-jamming makes communication with a human commander impossible. Other countries may be pursuing clandestine programmes with similar goals.
International humanitarian law — which governs attacks on humans in times of war — has no specific provisions for such autonomy, but may still be applicable. The 1949 Geneva Convention on humane conduct in war requires any attack to satisfy three criteria: military necessity; discrimination between combatants and non-combatants; and proportionality between the value of the military objective and the potential for collateral damage. (Also relevant is the Martens Clause, added in 1977, which bans weapons that violate the “principles of humanity and the dictates of public conscience.”) These are subjective judgments that are difficult or impossible for current AI systems to satisfy.
The United Nations has held a series of meetings on LAWS under the auspices of the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland. Within a few years, the process could result in an international treaty limiting or banning autonomous weapons, as happened with blinding laser weapons in 1995; or it could leave in place the status quo, leading inevitably to an arms race.
As an AI specialist, I was asked to provide expert testimony for the third major meeting under the CCW, held in April, and heard the statements made by nations and non-governmental organizations. Several countries pressed for an immediate ban. Germany said that it “will not accept that the decision over life and death is taken solely by an autonomous system”; Japan stated that it “has no plan to develop robots with humans out of the loop, which may be capable of committing murder” (see go.nature.com/fwric1).
The United States, the United Kingdom and Israel — the three countries leading the development of LAWS technology — suggested that a treaty is unnecessary because they already have internal weapons review processes that ensure compliance with international law.
Almost all states who are party to the CCW agree with the need for 'meaningful human control' over the targeting and engagement decisions made by robotic weapons. Unfortunately, the meaning of 'meaningful' is still to be determined.
The debate has many facets. Some argue that the superior effectiveness and selectivity of autonomous weapons can minimize civilian casualties by targeting only combatants. Others insist that LAWS will lower the threshold for going to war by making it possible to attack an enemy while incurring no immediate risk; or that they will enable terrorists and non-state-aligned combatants to inflict catastrophic damage on civilian populations.
LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting 'threatening behaviour'. The potential for LAWS technologies to bleed over into peacetime policing functions is evident to human-rights organizations and drone manufacturers.
In my view, the overriding concern should be the probable endpoint of this technological trajectory. The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them. For instance, as flying robots become smaller, their manoeuvrability increases and their ability to be targeted decreases. They have a shorter range, yet they must be large enough to carry a lethal payload — perhaps a one-gram shaped charge to puncture the human cranium. Despite the limits imposed by physics, one can expect platforms deployed in the millions, the agility and lethality of which will leave humans utterly defenceless. This is not a desirable future.
The AI and robotics science communities, represented by their professional societies, are obliged to take a position, just as physicists have done on the use of nuclear weapons, chemists on the use of chemical agents and biologists on the use of disease agents in warfare. Debates should be organized at scientific meetings; arguments studied by ethics committees; position papers written for society publications; and votes taken by society members. Doing nothing is a vote in favour of continued development and deployment.
Sabine Hauert: Shape the debate, don't shy from it
Lecturer in robotics, University of Bristol
Irked by hyped headlines that foster fear or overinflate expectations of robotics and artificial intelligence (AI), some researchers have stopped communicating with the media or the public altogether.
But we must not disengage. The public includes taxpayers, policy-makers, investors and those who could benefit from the technology. They hear a mostly one-sided discussion that leaves them worried that robots will take their jobs, fearful that AI poses an existential threat, and wondering whether laws should be passed to keep hypothetical technology 'under control'. My colleagues and I spend dinner parties explaining that we are not evil but instead have been working for years to develop systems that could help the elderly, improve health care, make jobs safer and more efficient, and allow us to explore space or beneath the oceans.
Experts need to become the messengers. Through social media, researchers have a public platform that they should use to drive a balanced discussion. We can talk about the latest developments and limitations, provide the big picture and demystify the technology. I have used social media to crowd-source designs for swarming nanobots to treat cancer. And I found my first PhD student through his nanomedicine blog.
The AI and robotics community needs thought leaders who can engage with prominent commentators such as physicist Stephen Hawking and entrepreneur–inventor Elon Musk and set the agenda at international meetings such as the World Economic Forum in Davos, Switzerland. Public engagement also drives funding. Crowdfunding for JIBO, a personal robot for the home developed by Cynthia Breazeal, at the Massachusetts Institute of Technology (MIT) in Cambridge, raised more than US$2.2 million.
There are hurdles. First, many researchers have never tweeted, blogged or made a YouTube video. Second, outreach is 'yet another thing to do', and time is limited. Third, it can take years to build a social-media following that makes the effort worthwhile. And fourth, engagement work is rarely valued in research assessments, or regarded seriously by tenure committees.
Training, support and incentives are needed. All three are provided by Robohub.org, of which I am co-founder and president. Launched in 2012, Robohub is dedicated to connecting the robotics community to the public. We provide crash courses in science communication at major AI and robotics conferences on how to use social media efficiently and effectively. We invite professional science communicators and journalists to help researchers to prepare an article about their work. The communicators explain how to shape messages to make them clear and concise and avoid pitfalls, but we make sure the researcher drives the story and controls the end result. We also bring video cameras and ask researchers who are presenting at conferences to pitch their work to the public in five minutes. The results are uploaded to YouTube. We have built a portal for disseminating blogs and tweets, amplifying their reach to tens of thousands of followers.
“Through social media, researchers have a public platform that they should use to drive a balanced discussion.”
I can list all the benefits of science communication, but the incentive must come from funding agencies and institutes. Citations cannot be the only measure of success for grants and academic progression; we must also value shares, views, comments or likes. MIT robotics researcher Rodney Brooks's classic 1986 paper on the 'subsumption architecture', a bio-inspired way to program robots to react to their environment, gathered nearly 10,000 citations in 30 years (R. Brooks IEEE J. Robot. Automat. 2, 14–23; 1986). A video of Sawyer, a robot developed by Brooks's company Rethink Robotics, received more than 60,000 views in one month (see go.nature.com/jqwfmz). Which has had more impact on today's public discourse?
Governments, research institutes, business-development agencies, and research and industry associations do welcome and fund outreach and science-communication efforts. But each project develops its own strategy, resulting in pockets of communication that have little reach.
In my view, AI and robotics stakeholders worldwide should pool a small portion of their budgets (say 0.1%) to bring together these disjointed communications and enable the field to speak more loudly. Special-interest groups, such as the Small Unmanned Aerial Vehicles Coalition that is promoting a US market for commercial drones, are pushing the interests of major corporations to regulators. There are few concerted efforts to promote robotics and AI research in the public sphere. This balance is badly needed.
A common communications strategy will empower a new generation of roboticists that is deeply connected to the public and able to hold its own in discussions. This is essential if we are to counter media hype and prevent misconceptions from driving perception, policy and funding decisions.
Russ Altman: Distribute AI benefits fairly
Professor of bioengineering, genetics, medicine and computer science, Stanford University
Artificial intelligence (AI) has astounding potential to accelerate scientific discovery in biology and medicine, and to transform health care. AI systems promise to help make sense of several new types of data: measurements from the 'omics' such as genomics, proteomics and metabolomics; electronic health records; and digital-sensor monitoring of health signs.
Clustering analyses can define new syndromes — separating diseases that were thought to be the same and unifying others that have the same underlying defects. Pattern-recognition technologies may match disease states to optimal treatments. For example, my colleagues and I are identifying groups of patients who are likely to respond to drugs that regulate the immune system on the basis of clinical and transcriptomic features.
In consultations, physicians might be able to display data from a 'virtual cohort' of patients who are similar to the one sitting next to them and use it to weigh up diagnoses, treatment options and the statistics of outcomes. They could make medical decisions interactively with such a system or use simulations to predict outcomes on the basis of the patient's data and that of the virtual cohort.
“AI technologies could exacerbate existing health-care disparities and create new ones.”
I have two concerns. First, AI technologies could exacerbate existing health-care disparities and create new ones unless they are implemented in a way that allows all patients to benefit. In the United States, for example, people without jobs experience diverse levels of care. A two-tiered system in which only special groups or those who can pay — and not the poor — receive the benefits of advanced decision-making systems would be unjust and unfair. It is the joint responsibility of the government and those who develop the technology and support the research to ensure that AI technologies are distributed equally.
Second, I worry about clinicians' ability to understand and explain the output of high-performance AI systems. Most health-care providers will not accept a complex treatment recommendation from a decision-support system without a clear description of how and why it was reached.
Unfortunately, the better the AI system, the harder it often is to explain. The features that contribute to probability-based assessments such as Bayesian analyses are straightforward to present; deep-learning networks, less so.
AI researchers who create the infrastructure and technical capabilities for these systems need to engage doctors, nurses, patients and others to understand how they will be used, and used fairly.
Manuela Veloso: Embrace a robot–human world
Professor of computer science, Carnegie Mellon University
Humans seamlessly integrate perception, cognition and action. We use our sensors to assess the state of the world, our brains to think and choose actions to achieve objectives, and our bodies to execute those actions. My research team is trying to build robots that are capable of doing the same — with artificial sensors (cameras, microphones and scanners), algorithms and actuators, which control the mechanisms.
But autonomous robots and humans differ greatly in their abilities. Robots may always have perceptual, cognitive and actuation limitations. They might not be able to fully perceive a scene, recognize or manipulate any object, understand all spoken or written language, or navigate in any terrain. I think that robots will complement humans, not supplant them. But robots need to know when to ask for help and how to express their inner workings.
To learn more about how robots and humans work together, for the past three years we have shared our laboratory and buildings with four collaborative robots, or CoBots, which we developed. The robots look a bit like mechanical lecterns. They have omnidirectional wheels that enable them to steer smoothly around obstacles; camera and lidar systems to provide depth vision; computers for processing; screens for communication; and a basket to carry things in.
Early on, we realized how challenging real environments are for robots. The CoBots cannot recognize every object they encounter; lacking arms or hands they struggle to open doors, pick things up or manipulate them. Although they can use speech to communicate, they may not recognize or understand the meaning of words spoken in response.
We introduced the concept of 'symbiotic autonomy' to enable robots to ask for help from humans or from the Internet. Now, robots and humans in our building help one another overcome each other's limitations.
CoBots escort visitors through the building or carry objects between locations, gathering useful information along the way. For example, they can generate accurate maps of spaces, showing temperature, humidity, noise and light levels, or WiFi signal strength. We help the robots to open doors, press lift buttons, pick up objects and follow dialogue by giving clarifications.
There are still hurdles to overcome to enable robots and humans to co-exist safely and productively. My team is researching how people and robots can communicate more easily through language and gestures, and how robots and people can better match their representations of objects, tasks and goals.
We are also studying how robot appearance enhances interactions, in particular how indicator lights may reveal more of a robot's inner state to humans. For instance, if the robot is busy, its lights may be yellow, but when it is available they are green.
Although we have a way to go, I believe that the future will be a positive one if humans and robots can help and complement each other.
Article Source : http://www.nature.com/news/robotics-ethics-of-artificial-intelligence-1.17611
<Questions>
Q1. What do you think about AI technology?
Q2. Did you watch the historic match between Lee Se-dol and AlphaGo? How did you feel about it?
Q3. Do you think an android can be ethical?
Q4. Why do we need robots? Why do we need AI technology?
Q5. Do you think AI can surpass the abilities of human beings?
Q6. Did you watch the movie Bicentennial Man? As depicted in that film, could we build robots with humane qualities?
Q7. What are the merits and demerits of artificial intelligence?
Q8. When many jobs are taken over by AI technology in the future, what will happen to our society and to human life? How should human beings prepare for that future?
Q9. Which jobs are safer from this technological development?