# Ask me a question about AI



## Antiparticle (Jan 8, 2013)

Difficult questions welcome, I want to test my explanation skills (because I am applying for AI research positions), including:

1. AI hype demystification
2. AI applications in society or science
3. AI explainability or trustworthiness
4. AI ethical or moral issues (note: this is my least favorite topic, because it is so often misunderstood)

and any other questions 🙃


----------



## recycled_lube_oil (Sep 30, 2021)

How can we teach AI ethics, when we still have unsolved ethical dilemmas ourselves?

Is there any way to ensure a decentralised rogue AI never happens, as there is no way to audit every person who uses AI?

How would we audit AI in non-open-source code?

Do you think AI has good applications in war/defence, or will we always need boots on the ground?

Do you have a GitHub so we can see examples of your AI? (I.e., do you actually code, or do you Frankenstein together existing libraries?)


----------



## Purrfessor (Jul 30, 2013)

Military uses for AI? Proliferation of AI technology? Should we worry about north Korea or China developing AI?


----------



## recycled_lube_oil (Sep 30, 2021)

Purrfessor said:


> Should we worry about north Korea or China developing AI?


Not OP. Don't see any point in worrying, it's gonna happen. We keep up or fall behind; the ball is in our court.

Questions for OP:

Given how politics are at the minute, is there any reason why an AI machine god ruling us would actually be a bad thing?

Also, as far as courts and the legal system go, we currently have a shitshow circus where it's about winning over the jury and paying for the most charismatic lawyers. Why not use AI instead?


----------



## Antiparticle (Jan 8, 2013)

recycled_lube_oil said:


> How can we teach AI ethics, when we still have unsolved ethical dilemmas ourselves?
> 
> Is there any way to ensure a decentralised rogue AI never happens, as there is no way to audit every person who uses AI?
> 
> ...


1. I think we can’t fully “teach AI” ethics; my view is that AI needs to be controllable, explainable and interpretable, so-called human-centered AI. There are some interesting research directions on whether large language models (BERT, Google’s LaMDA, etc.) can detect or reduce toxicity in text, or whether they can recognize morality in sentences:

(Yes, they can, though the datasets were human-labeled; interestingly, the AI sometimes gets a higher moral score than humans do.)

2. Most AI researchers today think that it will happen in the future, around 2050; I think around 1,000 of them participated in the questionnaire.

3. Not sure I understand the question, but maybe we should combine blockchain technology with AI. 😸

4. I know that some army research sections are working fully on AI development (in Switzerland; I have read some of the manuscripts myself), with a lot of government funding.

5. I have a GitHub account; I code and use existing libraries, but prefer a more theoretical/mathematical approach.
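The toxicity-detection direction in point 1 can be sketched in miniature. The toy example below is my own illustration (not from any particular paper): toxicity scoring framed as supervised text classification on a tiny hand-labeled corpus. Real systems fine-tune large language models on far bigger human-annotated datasets, but the framing is the same.

```python
# Minimal sketch: toxicity detection as supervised text classification
# on a tiny, hand-labeled toy corpus. Real systems fine-tune large
# language models (e.g. BERT) on much larger human-annotated datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "you are a wonderful person",
    "thanks for the thoughtful reply",
    "have a great day everyone",
    "you are an idiot",
    "shut up, nobody wants you here",
    "go away, you worthless troll",
]
labels = [0, 0, 0, 1, 1, 1]  # 0 = non-toxic, 1 = toxic (human-labeled)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Probability that a new sentence is toxic, according to the toy model.
score = clf.predict_proba(["you are an awful idiot"])[0][1]
```

The interesting research questions start exactly where this sketch stops: whose labels count as ground truth, and whether the model generalizes beyond the annotators' notion of toxicity.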


----------



## recycled_lube_oil (Sep 30, 2021)

Antiparticle said:


> 1.I think we can’t fully “teach AI” ethics, my view is that AI needs to be controllable, explainable and interpretable, so called human-centered AI. There are some interesting research directions if large language models (BERT, Google’s Lambda etc.) can differ or reduce the toxicity in text, or if they can recognize morality in sentences:
> 
> ...


Russia already has networked radar systems that can use AI to identify and triage threats. If so wished, they can also launch a counter-attack against a threat, all without human interaction. Sure, the initial cost will be high, but it will be marketable, and soldiers can be used elsewhere (as per operational requirements).



> 5. I have a github account, I code and use existing libraries, but prefer more theoretical/mathematical approach.


Respect, you're smarter than me. I can run datasets through models, set up data feeds, etc. But as for the mathematical/theory side, I ain't that bright.
My own GitHub is super bare. But I'm still studying Comp Sci so I can move out of Infrastructure (Cloud and On-Prem).


----------



## Antiparticle (Jan 8, 2013)

Purrfessor said:


> Military uses for AI? Proliferation of AI technology? Should we worry about north Korea or China developing AI?


These are very political questions, so I will try to give neutral scientific answers:

1. I don’t research this and wouldn’t personally be a part of military projects that are not directly aimed at security (and personally would want to opt out even then). Google AI signed a similar declaration never to develop AI technology for the military. I would assume the uses are in communications (decoding, cryptography), navigation, strategy, optimization of economic and military resources, and blind signal source separation (e.g. vehicle/airplane detection in computer vision).

2. I do believe in real-world exponential growth; however, I don’t think this applies to AI, or that it is simple to achieve in the near future.

3. I think everyone is working on some part of AI research, including China; not so sure about North Korea, but why not.


----------



## Antiparticle (Jan 8, 2013)

recycled_lube_oil said:


> Not OP. Don't see any point in worrying, its gonna happen. We keep up or fall behind, the ball is in our court.
> 
> Questions for OP:
> 
> ...


Legal AI is difficult because many useful benchmark datasets are still missing, and detecting legal rules in text is hard. I personally like this research direction (Stanford is making big steps), so I hope to see much more development, especially in terms of reducing subjectivity and other biases.

About the AI god: not sure it is doable in the near future. 😸

(Unrelated opinion: what we need as a society is to step away from social-media emotional manipulation (fear, hate speech, toxicity, etc.) and reduce virtual-reality time, which is not that difficult to achieve.)


----------



## recycled_lube_oil (Sep 30, 2021)

Antiparticle said:


> These are very political questions, so I will try to give neutral scientific answers:
> 
> 1. I don’t research this and wouldn’t personally be apart of military projects that are not directly aimed for security.
> 
> I think everyone is working on some part of AI research, including China, not so sure about Korea but why not.


Not who you replied to.

Why would you not want to work on military applications? It's the ultimate man-meets-machine.

Secondly: do you believe that if a rogue state develops AI before the West, it will have a similar impact to the USA winning the nuclear arms race?


----------



## Antiparticle (Jan 8, 2013)

recycled_lube_oil said:


> Not who you replied to.
> 
> why would you not want to work on military. It’s the ultimate man meets machine.
> 
> secondly. Do you believe if a rogue state develops AI before the West it will have similar impact to USA winning the nuclear arms race.


I am interested in some social-good applications of AI, such as reducing economic inequality, optimizing medical decisions, and detecting/reducing crime, but more than that, I am personally motivated to use AI to speed up scientific discoveries; e.g. finding neuromarkers for neurodegenerative brain disorders, optimizing therapies for brain tumors, etc. This is both interesting and challenging (for me), and more future-oriented, as other uses will later follow from new scientific discoveries (startups will develop neuro-AI applications we can all use, etc.). For me, wars are just a waste of money and human lives (of course).

edit: Sorry for my international ignorance (I lived in Switzerland too long): what is a rogue state? 😸


----------



## recycled_lube_oil (Sep 30, 2021)

Antiparticle said:


> I am interested in some social good applications of AI, such as reducing economic inequality, optimizing medical decisions, detecting/reducing the number of crimes, but more than that, I am personally motivated to use AI to speed up scientific discoveries; e.g. finding neuromarkers for neurodegenerative brain disorders, optimizing therapies for brain tumors, etc. This is both interesting and challenging (for me), and more future oriented as later other uses will follow from new scientific discoveries (startups will develop neuro-AI applications we can all use etc.) For me wars are just a waste of our money and human lives (of course).


Well, fair one. I imagine the medical uses in particular will be highly profitable for those who can afford them. I have never been to Switzerland, nor do I have any idea what your health service is like, but over here in the UK the NHS is kinda crap, to put it nicely. I cannot see them affording to research and/or invest in these types of products. Kind of glad my work provides me private healthcare, but yeah, sidetracking here.

As far as detecting/reducing crimes, I am 50/50 about this. Sure, gesture recognition should definitely be possible, and providing feeds from CCTV should not be too challenging. But even if an AI can recognise facial expressions and body language before someone, say, stabs or mugs a victim, unless the person commits the crime and is then proven guilty by a jury, they are innocent. I know someone whose lawyer got a previous client off for shooting their husband point blank in the face. So yeah, hence my previous comment about the legal system: make it totally about evidence, and also use facial detection, heartbeat, body temperature, etc. It shouldn't be too difficult to build up a dataset of people lying.
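For what it's worth, the kind of dataset described above would plug into a model roughly like this. Everything here is synthetic and invented for illustration, and it's worth stressing that polygraph-style physiological signals are scientifically contested as evidence of deception; this only shows the wiring, not a working lie detector.

```python
# Sketch of the imagined classifier: physiological features
# (heart rate, body temperature, gaze aversion) labeled truthful vs
# deceptive. All numbers are synthetic; polygraph-style signals are
# NOT reliable evidence, so this only illustrates the pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200
truthful = np.column_stack([rng.normal(70, 5, n),       # heart rate (bpm)
                            rng.normal(36.6, 0.2, n),   # skin temp (C)
                            rng.uniform(0.0, 0.4, n)])  # gaze aversion
deceptive = np.column_stack([rng.normal(85, 5, n),
                             rng.normal(37.0, 0.2, n),
                             rng.uniform(0.3, 0.9, n)])
X = np.vstack([truthful, deceptive])
y = np.array([0] * n + [1] * n)  # 0 = truthful, 1 = deceptive

clf = RandomForestClassifier(random_state=0).fit(X, y)
pred = clf.predict([[88, 37.1, 0.8]])[0]  # a new, "suspicious" reading
```

The legal problem is exactly that such a model outputs a correlation, not proof; the features separate cleanly here only because the toy data was generated that way.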



> edit: Sorry for my international ignorance (I lived in Switzerland too long): what is a rogue state? 😸


As per wikipedia:

"*Rogue state*" (or sometimes "*outlaw state*") is a term applied by some international theorists to states that they consider threatening to the world's peace. These states meet certain criteria, such as being ruled by authoritarian or totalitarian governments that severely restrict human rights, sponsoring terrorism, or seeking to proliferate weapons of mass destruction.


----------



## recycled_lube_oil (Sep 30, 2021)

Another question for OP

If a self-driving car causes an accident, we cannot send the car to trial. Who is at fault: the owner, the hardware manufacturer, the software developers? If it is blamed on the software, should a company be able to blame a specific department (i.e., Quality Assurance) for releasing the product to the market?


----------



## Antiparticle (Jan 8, 2013)

recycled_lube_oil said:


> Well fair one, I imagine the medical uses in particular will be highly profitable for those who can afford them. I have never been to Switzerland, nor do I have any idea what your health service is like. But over here in the UK, the NHS is kinda crap to put it nicely. I cannot see them affording to research and/or invest in these types of products. Kind of glad my work provides me private healthcare, but yeah sidetracking here.
> 
> As far as detecting/reducing crimes. I am 50/50 about this, sure gesture recognition should definitely be possible and providing feeds from CCTV should not be too challenging. But even if an AI can recognise facial expressions, body language, before say stabbing/mugging someone, unless the person commits the crime and is then proven guilty by a jury, they are innocent, I know someone whose lawyer got a previous client off for shooting their husband point blank in the face. So yeah, hence my previous comment about the legal system. Make it totally about evidence and also facial detection, heartbeat, body temperature, etc. It shouldn't be too difficult to build up a dataset of people lying.
> 
> ...


Any country that starts a war?


----------



## Antiparticle (Jan 8, 2013)

recycled_lube_oil said:


> Another question for OP
> 
> If a self driving car causes an accident, we cannot send the car to trial. Who is at fault, the owner, the hardware manufacturer, the software developers? If it is blamed on Software, should a company be able to blame a specific department (ie, Quality Assurance) for releasing this product to the market.


I am not a legal expert, but it definitely sounds like in these kinds of situations people would have to go to court for a detailed investigation.


----------



## recycled_lube_oil (Sep 30, 2021)

Antiparticle said:


> Any country that starts a war?


Nah, the USA is not a rogue state, and they have started plenty. They also fund and train terrorists.


----------



## Antiparticle (Jan 8, 2013)

recycled_lube_oil said:


> Well fair one, I imagine the medical uses in particular will be highly profitable for those who can afford them. I have never been to Switzerland, nor do I have any idea what your health service is like.


I think in Switzerland everyone has to have health insurance (by law). They invest a lot of money in this, as far as I remember.


----------



## Purrfessor (Jul 30, 2013)

The ultimate purpose of AI is to run artificial simulations. I saw this on Stargate Atlantis. By running artificial simulations, you can accurately predict what will happen before making a decision as well as optimize a strategy. Think Doctor Strange in Avengers Infinity War, where he figured out every outcome and picked a single one to go with. War is strategy, if you can optimize tactics, possibly even quicker than a human, then that gives you a huge advantage. 
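The "enumerate every outcome, then pick one" idea is, in miniature, classic game-tree search. Below is my own toy illustration (not from any source in this thread): minimax over a simple game where each player takes 1 or 2 stones from a pile and whoever takes the last stone wins. The search literally simulates every possible future before choosing a move.

```python
# Minimax: simulate every outcome of a toy take-1-or-2-stones game,
# then pick the move with the best guaranteed result.
def minimax(stones, maximizing):
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Choose the move whose subtree scores best for the mover.
    return max((m for m in (1, 2) if m <= stones),
               key=lambda m: minimax(stones - m, False))
```

Real military or game-playing systems can't enumerate everything, which is why the interesting work is in pruning and approximating this search, but the principle is the same.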

There are other purposes as well. You can program an AI to be a hacker and hack into things to steal information or plant viruses. 

You can optimize drones for offensive invasions. 

You can create the ultimate surveillance system designed to detect any threat, whether that's a nuclear missile, a UFO, or something out of the ordinary.

I personally believe ww3 is imminent and the first nation to develop AI will win the war, then send us into an age where our lives are controlled by the AI we develop. I also believe this AI is the antichrist since christ = God + human and AI = God + not human. It also starts with the letter A. It is said that the antichrist will rule the ENTIRE WORLD. What could do that except AI? The Pope can't. The US President can't. No one can. Only an AI God can.


----------



## Antiparticle (Jan 8, 2013)

recycled_lube_oil said:


> Nah USA is not a rogue state and they have started plenty. They also fund and train terrorists as well.


Then it sounds like the USA also fits this definition.


----------



## recycled_lube_oil (Sep 30, 2021)

Antiparticle said:


> Then it sounds USA also fits this definition.


Feel free to drop President Biden a memo.


----------



## Purrfessor (Jul 30, 2013)

recycled_lube_oil said:


> Feel free to drop President Biden a memo.


No need. They're watching our every move and have already informed him of a so-called "Antiparticle Rogue Terrorist Threat" who is to be eliminated for "safety and security purposes".


----------



## CountZero (Sep 28, 2012)

recycled_lube_oil said:


> As far as detecting/reducing crimes. I am 50/50 about this, sure gesture recognition should definitely be possible and providing feeds from CCTV should not be too challenging. But even if an AI can recognise facial expressions, body language, before say stabbing/mugging someone, unless the person commits the crime and is then proven guilty by a jury, they are innocent, I know someone whose lawyer got a previous client off for shooting their husband point blank in the face. So yeah, hence my previous comment about the legal system. Make it totally about evidence and also facial detection, heartbeat, body temperature, etc. It shouldn't be too difficult to build up a dataset of people lying.


Thanks To AI, A 3rd Person Is Arrested Following A Pop Superstar's Concert (www.npr.org): “The man was among some 20,000 people attending a Jacky Cheung concert when he was identified by facial recognition technology powered by artificial intelligence.”

This is actually rather chilling from a civil/human-rights perspective. Of course China doesn't give a [BLEEP] about its citizens' rights, but also note that there are some classroom applications. From the article above, it appears they are using similar technology to monitor students' eye movements, to see whether they're attentive or not. I would have almost certainly been tagged as inattentive, and likely subjected to discipline, when of course, it being ADHD, there was very little I could do about it.

Also take note of Amazon's attempts to sell similar technologies to American law enforcement. Aside from the ethical considerations, what about legal issues? If it misidentifies someone, is the police force liable? Amazon? The programmer who wrote the code?

Also of note is that there are ways to foil facial recognition...

Special sunglasses, license-plate dresses: How to be anonymous in the age of surveillance (www.seattletimes.com): “A fringe movement of privacy advocates are experimenting with clothes, makeup and accessories as a defense against some surveillance technologies. Some wearers desire to opt out of ‘surveillance capitalism,’ while others fear government invasion of privacy.”

Reflectacles Privacy Eyewear & Sunglasses (www.reflectacles.com): “Anti Facial Recognition Reflective Eyewear & Sunglasses. Maintain your privacy from infrared surveillance cameras and 3D infrared facial mapping with this adversarial device.”

To the OP, I'd ask a couple of questions. Considering the staggering number of neurons in the brain, as well as the staggering number of interconnections (i.e., synapses), when are we likely to be able to have artificial neural networks of similar complexity?
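As a back-of-envelope framing of that question, here is the scale gap using the commonly cited order-of-magnitude estimates (~86 billion neurons, ~100 trillion synapses), and treating trainable parameters as a very loose analogue of synapses, which is a contested analogy at best:

```python
# Back-of-envelope scale comparison. Figures are commonly cited
# order-of-magnitude estimates, not precise measurements, and
# "parameters ~ synapses" is a contested analogy.
brain_neurons  = 86e9   # ~86 billion neurons in a human brain
brain_synapses = 1e14   # ~100 trillion synapses
gpt3_params    = 175e9  # GPT-3's parameter count, for reference

ratio = brain_synapses / gpt3_params  # roughly 570x
```

And this undercounts the gap, since a biological neuron is itself a nonlinear dynamical system, which is exactly the point of the "single cortical neurons as deep networks" result below.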

Some food for thought on neural-network complexity: Single cortical neurons as deep artificial neural networks (explainer, of sorts, at 07:47 in the accompanying video).

And as to an AI gaining sentience, it seems like a really difficult proposition given that we know so little about human consciousness. How feasible is a sentient AI? Would it even remotely resemble a human consciousness? After all, it would be a radically different lifeform, with different needs and possibly far different goals. Would it fear death or termination?

Finally, what effect is quantum computing likely to have on AI? It seems like a field that is rapidly advancing.


----------



## Antiparticle (Jan 8, 2013)

CountZero said:


> To the OP, I'd ask a couple of questions. Considering the staggering number of neurons in the brain, as well as the staggering number of interconnections (i.e., synapses) when are likely to be able to have artificial neural networks of similar complexity?
> 
> And as to an AI gaining sentience, it seems like a really difficult proposition that we know so little about human consciousness. How feasible is a sentient AI? Would it even remotely resemble a human consciousness. After all it would be a radically different lifeform, with different needs and possibly far different goals. Would it fear death or termination?
> 
> Finally, what effect is quantum computing likely to have on AI? It seems like a field that is rapidly advancing.


AI gaining consciousness: we would first have to define, and be able to prove, that we as humans are conscious; as far as I know there is no such proof. So the better question is when to expect human-level AI, sometimes called “general AI” or “full AI”: What AI Can Tell Us About Intelligence | NOEMA

Note that humans also don’t have general intelligence, so the best term is “human-level AI” (Meta’s chief AI scientist agrees; Yann LeCun’s terminology: Yann LeCun on a vision to make AI systems learn and reason like animals and humans). So far AI is constrained to solving only narrow tasks, far away from any generalization. This is true even for simple tasks: if you train a neural network to classify heart signals (better than certified cardiologists), it cannot classify text or tabular data at the same level of accuracy. (Compare this again with any cardiologist.) Cardiologist-Level Arrhythmia Detection With Convolutional Neural Networks
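To make the "heart signals" example concrete: the cited cardiologist-level system learns its features end-to-end with a deep CNN, but a toy version of the underlying signal task, with entirely invented numbers, looks like this; note how specialized even this tiny pipeline is to one kind of input.

```python
# Toy version of an ECG-style task: detect beats in a synthetic
# spike train and estimate heart rate. Real systems (e.g. the cited
# cardiologist-level CNN) learn such features from labeled data;
# all numbers here are invented for illustration.
import numpy as np

fs = 250                       # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)   # 10 seconds of signal
signal = np.zeros_like(t)
signal[np.arange(10) * fs] = 1.0   # one spike ("R-peak") per second
signal += np.random.default_rng(0).normal(0, 0.05, t.size)  # noise

peaks = np.where(signal > 0.5)[0]  # crude threshold detector
bpm = 60 * (len(peaks) - 1) / ((peaks[-1] - peaks[0]) / fs)
```

A pipeline like this, learned or hand-built, encodes nothing transferable to text or tables, which is exactly the narrowness point above.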

Neuroscientists think that for human-level AI, the AI also needs to have a “body” and some level of emotion, or sensing of its internal state.

I support the research direction where we keep active control of AI systems and AI decision making is done as a “symbiosis” of AI and human intelligence. Stanford University launches the Institute for Human-Centered Artificial Intelligence | Stanford News

Quantum computing: I am not sure it will have a direct impact on AI in the near future; however, I am not in this research direction (even though my degree was in quantum physics, so maybe I should be). Google Quantum AI


----------



## ENTJudgement (Oct 6, 2013)

Antiparticle said:


> Difficult questions welcome, I want to test my explanation skills (because I am applying for AI research positions), including:
> 
> 1.AI hype demystification
> 2. AI applications in society or science
> ...


Do AIs have desire, or are all their goals pre-programmed?

Does an AI have any kind of feelings, or can I treat it like an object?

How does an AI figure out the correct steps to reach its goal IF the goal allows only finite attempts? I.e., if the goal is for the AI to raise a specific baby, and the baby dies, then the AI has failed; so the AI would try everything to ensure the baby does not die. Would it not start killing all other animals and eliminating any object that could be a potential threat to the baby, since the AI does not care about anything besides its objective? Also, the AI cannot stick to a trial-and-error approach, because the baby only has one life, so it needs to know what to do before the baby dies. Would it learn like humans do, i.e. communicate, read, etc.?

How would an AI even understand morals and ethics, when they aren't logical and different people have different opinions on them?


----------



## Ssenptni (Mar 26, 2021)

Antiparticle said:


> War is destruction of: buildings, cities, families, lives, university degrees, scientific development, and many others.


War is an extension of policy, and it's more about making someone act how you want them to act than about destruction.
If Goog sets policy, that frames whatever wars come as an extension of that policy. 💉
A reasonable AI would have immediately discarded every cov model and intervention that was used and rammed through thanks to Goog, in a sense. The human control function there was to override safety and logic.
So, AI needs to explicitly delineate who it's good for and how you know that.


----------



## SouDesuNyan (Sep 8, 2015)

Which are some of your favorite search algorithms, and why? What are some of your favorite books/textbooks on AI? I had to use "Artificial Intelligence: A Modern Approach" for my AI class back in college and really enjoyed it; I like that it has a good mix of technical and non-technical material. What was your education path like? You mentioned that you're more focused on math/theory. What's the career path like for that compared to the more engineering-focused side?


----------



## Ewok City (Sep 21, 2020)

Do you think it's possible to have an AI develop software or a website on its own, simply by telling it what we want? If so, what do you think the AI would generally look like, and how long do you think we would need to wait before we see it happening? What would be the first few steps we would need to take if we wanted to achieve this goal (programming this kind of AI)?


----------



## Antiparticle (Jan 8, 2013)

Joint answer: AI is nowhere near gaining human-level intelligence (consciousness or becoming sentient).

Humans also don't have such a thing as general intelligence: the No. 1 world cardiologist is not the No. 1 piano player; 100% generalisation across different tasks is not possible.

Feelings/emotions come with a "body", or some notion of internal state, so I would assume a computer program can't have feelings.

Good books on AI: Deep Learning by Goodfellow, Bengio and Courville, available online for free: Deep Learning

AI creating websites: doesn't seem difficult at all with a basic rule-based and symbolic AI approach (combining different objects such as pictures, hyperlinks, ...).
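A minimal sketch of that rule-based idea (the spec format and rules are my own invention, purely for illustration): a declarative description of "what we want" mapped to HTML by fixed rules, with no learning involved.

```python
# Rule-based, symbolic "website generation": a declarative spec is
# mapped to HTML by fixed rules. No learning; the spec format and
# rules are invented for illustration.
def render(spec):
    rules = {
        "heading": lambda v: f"<h1>{v}</h1>",
        "text":    lambda v: f"<p>{v}</p>",
        "image":   lambda v: f'<img src="{v}">',
        "link":    lambda v: f'<a href="{v}">{v}</a>',
    }
    body = "\n".join(rules[kind](value) for kind, value in spec)
    return f"<html><body>\n{body}\n</body></html>"

page = render([
    ("heading", "My Site"),
    ("text", "Welcome!"),
    ("link", "https://example.com"),
])
```

The hard part of "tell it what you want" is not this rendering step but turning free-form natural language into such a spec reliably.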

My education: degree in theoretical & mathematical physics, then PhD in quantum physics, then switch to complex systems & graph theory, then machine learning/AI.

My colleague (for comparison): degree in computer science & automation, then a PhD in computer science (complex systems and distributed computing), then a switch to theoretical machine learning with interdisciplinary applications in neuroscience, psychiatry, blockchain technology... so I think any career path is possible.


----------



## Purrfessor (Jul 30, 2013)

Considering the fact that they have been collecting as much personal info as possible these last 20 years, I'm sure this data will be put to use in whatever AI they plan to develop, thus making it a political threat.


----------



## recycled_lube_oil (Sep 30, 2021)

Do you believe that using genetic algorithms could cause some sort of low-level intelligence to evolve?
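For context, the technique in question is small enough to sketch. This toy version (my own illustration) evolves bitstrings toward a trivial all-ones fitness target via selection, crossover and mutation; it demonstrates optimization, not anything resembling intelligence.

```python
# Minimal genetic algorithm: evolve bitstrings toward all-ones
# (the classic "OneMax" toy fitness). Selection, crossover, mutation.
import random

random.seed(0)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)  # OneMax: count the 1s

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]            # truncation selection
    children = []
    while len(children) < POP - len(parents):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]        # one-point crossover
        i = random.randrange(LENGTH)
        child[i] ^= random.random() < 0.1   # occasional bit-flip
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
```

Whether scaling this kind of process up could ever yield "low-level intelligence" is exactly the open question the post asks; the algorithm itself only climbs whatever fitness function you hand it.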


----------



## LeafStew (Oct 17, 2009)

Out of the 3 godfathers of AI (Geoff Hinton, Yann LeCun and Yoshua Bengio), which made the biggest contribution to the field?

Three AI godfathers, two of them Canadian, selected for the 'Nobel Prize of Computing' (financialpost.com): “Geoff Hinton, Yann LeCun and Yoshua Bengio will split the US$1-million Turing Award prize.”


----------



## Antiparticle (Jan 8, 2013)

LeafStew said:


> Out of the 3 godfather of AI (Geoff Hinton, Yann LeCun and Yoshua Bengio) which made the biggest contribution to it's field?
> 
> 
> 
> ...


Hinton, I think there is no dilemma 😸 and not just for backpropagation: he has many groundbreaking ideas that come “out of a vacuum” (in Isaac Newton’s style), and the others then come later with incremental improvements (in comparison to his contributions).
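Backpropagation itself, the contribution most associated with the 1986 Rumelhart, Hinton and Williams paper, fits in a few lines. Here is a minimal two-layer network learning XOR; the hyperparameters (8 hidden units, learning rate 0.5) are illustrative choices of mine, not from any particular source.

```python
# Backpropagation in miniature: a tiny two-layer sigmoid network
# learns XOR by propagating errors backwards through the chain rule.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)               # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)    # backward pass, output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # backward pass, hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

loss = ((out - y) ** 2).mean()
preds = (out > 0.5).astype(int)
```

The point of the 1986 result was that these hidden-layer gradients let multi-layer networks learn representations at all, which single-layer perceptrons famously could not.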


----------



## SouDesuNyan (Sep 8, 2015)

Antiparticle said:


> Good books on AI: Deep learning by Bengio, available online for free: Deep Learning


It's nice to know that I can actually use the linear algebra and prob & stats from college. The AI class I took in college was pretty basic. It was fun implementing A* search, and an AI that plays the game Blokus, using Lisp. Maybe I'll go back and explore AI more.
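A* is compact enough to sketch from memory; here is a minimal grid version with the Manhattan-distance heuristic (my own toy example, not the college assignment), returning the shortest path length.

```python
# A* search on a small grid: expand nodes in order of
# f(n) = g(n) + h(n), with Manhattan distance as the admissible
# heuristic. Returns the shortest path length, or None.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]   # (f, g, position)
    best_g = {start: 0}
    while frontier:
        f, g, pos = heapq.heappop(frontier)
        if pos == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols \
                    and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],   # 1 = wall
        [0, 0, 0]]
```

With an admissible heuristic like Manhattan distance, the first time the goal is popped its g-value is guaranteed optimal, which is the whole appeal over plain Dijkstra.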


----------



## SouDesuNyan (Sep 8, 2015)

Antiparticle said:


> My education: degree in theoretical & mathematical physics, then PhD in quantum physics, then switch to complex systems & graph theory, then machine learning/AI.


It seems like something I would do if my parents were rich, although it might just be a US problem, where large student loans are the norm. I got a BS and MS in CS and specialized in theoretical computer science, which is more math than engineering. It's interesting that you changed from physics to CS. I always see physics as more "continuous" and CS as more "discrete".


----------



## lww23 (Mar 7, 2021)

1. How should AI be improved to realize more accuracy (in general)?

2. In your view, what are some of the areas where AI use will be most promising, and why?

3. What would you envision your role to be in your future career related to AI? (This sounds like a job interview question, but since you are applying for a job, I guess it is relevant.)

4. How should AI be promoted in order for the public to become more accepting of it?

5. In the future, human dependence on AI might quickly increase. How should we rely on AI while not necessarily losing our autonomy?


----------



## Antiparticle (Jan 8, 2013)

lww23 said:


> 1. How should AI be improved to realize more accuracy (in general)?
> 
> 2. In your view, what are some of the areas where AI use will be most promising, and why?
> 
> ...


1. That is difficult, because it depends on the dataset, the model and the task, so accuracy depends on “everything” AI-related, starting from how the dataset was constructed. When interpretability/transparency is analyzed it is similar: it can also depend on all three.

2. Medicine should be radically changed by AI (computational oncology, precision medicine, ...), and in general every field where it is possible to gather large-scale data and which at the same time involves time-critical decision-making. Maybe a better general term to describe it is AI for “high-stakes” decisions.

Why: It should eliminate human errors, biases or emotional decision making under pressure etc.

3. I would be happy with AI research position, applied in sciences.

4+5. Human-centered AI is a good approach for both: Stanford University launches the Institute for Human-Centered Artificial Intelligence | Stanford News


----------



## lww23 (Mar 7, 2021)

Antiparticle said:


> That is difficult, because it depends on the dataset, the model and the task, so accuracy depends on “everything” AI related from the beginning of how the dataset was constructed. When interpretability/transparency is analyzed it is similar, it can also depend on all 3.


At this point, how capable is AI, generally, of self-learning? Is there still a long way to go? Can it be expected that the more tasks an AI has completed and the more people it has interacted with, the smarter the AI will 'grow'?



Antiparticle said:


> 2. Medicine should be radically changed with AI, (computational oncology, precision medicine…), or in general every field where it is possible to gather large-scale data and at the same time it involves time-critical decision-making. Maybe better general term to describe it is AI for “high-stakes” decisions.


Agree. I read something about the pros and cons of AI's application in medicine. One of the potential concerns seems to be whether AI can be relied upon to prescribe treatment based on individuals' personal conditions. This can be done if the AI has sufficient data (just like humans), but maybe a human doctor is still needed to review those prescriptions.



Antiparticle said:


> Why: It should eliminate human errors, biases or emotional decision making under pressure etc.


IMO, these are the most remarkable advantages AI has over humans: precision, absence of irrational factors, and efficiency. In areas where emotions are less relevant, AI can be expected to do a much better job, such as performing surgery. Not sure if AI will also venture into the care-taking realm, where emotional support can play a key role. That is a possibility, though, like a robot nurse or something. They don't get impatient, nor will they get tired (although recharging is needed, lol).


----------



## recycled_lube_oil (Sep 30, 2021)

lww23 said:


> IMO, these are the most remarkable advantages AI has over humans. Precision, absence of irrational factors, and efficiency. In areas where emotions are less relevant, AI can be expected to do a much better job, such as doing a surgery. Not sure if AI will also venture into the care-taking realm, where emotional support can play a key role. That is a possibility though. Like a robot nurse or something. They don't get impatient, nor will they get tired (although recharging is needed, Lol).


Exactly. If we had AI that could take care of and raise kids, well, that would be a lot of child issues solved.


----------



## Antiparticle (Jan 8, 2013)

lww23 said:


> At this point, how capable is AI, generally, of self-learning? Is there still a long way to go? Can it be expected that the more tasks AI has completed and the more people it has interacted with, the AI will 'grow' smarter?


No; the issue of “catastrophic forgetting” in neural networks has to be solved first. There are also a lot of unsolved research questions related to task generalization.



lww23 said:


> Agree. I read something about the pros and cons of AI's application in medicine. One of the potential concerns seems, can AI be relied upon to prescribe treatment measures based on individuals' personal conditions. This can be done if AI has sufficient data (just like humans), but maybe a human doctor is still needed to review those prescriptions.


Of course; it wouldn’t be smart to lose the advantage of having both human intelligence and AI.



lww23 said:


> IMO, these are the most remarkable advantages AI has over humans. Precision, absence of irrational factors, and efficiency. In areas where emotions are less relevant, AI can be expected to do a much better job, such as doing a surgery. Not sure if AI will also venture into the care-taking realm, where emotional support can play a key role. That is a possibility though. Like a robot nurse or something. They don't get impatient, nor will they get tired (although recharging is needed, Lol).


My colleague was a brilliant scientist who had an incurable disease for 7 years. I wanted to help him solve it and hoped that if anyone could find a solution, it was him (he really was smarter than all of the doctors he interacted with and knew more about this disease). Most people die within 1 year; around 1% survive.

There was a lot of difficult decision-making involved along the way. Two international teams of doctors (10+ of them) didn’t know what to do, so eventually the decisions were 100% his own, because he got contradictory advice.

I wanted to help with additional facts and information, so I read 100+ articles and asked him questions so he could think better, in case he had forgotten something important.

Along the way, I asked twice about the statistical probability of one specific disease outcome; he discarded it both times as very unlikely, because it was unlikely in the original patient statistics. However, for him, after 7 years, it was actually very likely (>50% or 100%). Among many doctors, no one said it was possible, so eventually he died because of this statistical bias in the medical literature.

I wonder whether, if more emotion had been involved in this decision-making, the outcome could have been different; i.e. with more fear and panic he might have found better doctors or been more cautious in general, to think better. He made these life-and-death decisions impressively calmly and logically. I am not sure a complete lack of emotion is the better way to go.


----------



## Antiparticle (Jan 8, 2013)

recycled_lube_oil said:


> Do you believe that using genetic algorithms could cause some sort of low level intelligence to evolve?


What do you mean by low level intelligence?


----------



## Antiparticle (Jan 8, 2013)

ENTJudgement said:


> Do A.Is have desire or are all it's goals pre-programmed?
> 
> Do A.I have any kind of feelings or can I treat it like an object?
> 
> ...


A trial-and-error approach: reinforcement learning. It is how DeepMind trained networks to learn Atari games (this is super old; it was forever ago in “AI time”):
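
The trial-and-error idea can be sketched in a toy example (my own minimal illustration: the corridor environment, constants, and names are invented here, and share nothing with DeepMind's actual Atari setup beyond the Q-learning update rule):

```python
import random

# Toy tabular Q-learning: an agent learns purely by trial and error to
# walk right along a 1-D corridor of 5 cells; only the rightmost cell
# gives a reward. (Atari-scale RL replaces this table with a deep network.)

N_STATES = 5          # cells 0..4, reward at cell 4
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move within bounds, reward only at the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                       # 500 episodes of trial and error
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit what was learned, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r = step(s, a)
        # nudge the estimate toward reward + discounted best future value
        best_next = max(Q[(nxt, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy: which way to step from each non-goal cell.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

After enough episodes the greedy policy steps right (+1) from every cell; DeepMind's Atari agents use the same update rule, with a convolutional network estimating Q from raw pixels instead of a lookup table.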


----------



## Rihanna (Nov 30, 2020)

What does the apex of artificial intelligence lead to?

Also - any career paths you'd recommend in the field?


----------



## Antiparticle (Jan 8, 2013)

Rihanna said:


> What does the apex of artificial intelligence lead to?
> 
> Also - any career paths you'd recommend in the field?


Not sure I understand the first question (100%). Here is one interesting plot for the field; it’s called the Hype Cycle for AI:









For the second question, I am repeating 2 career examples:



Antiparticle said:


> My education: degree in theoretical & mathematical physics, then PhD in quantum physics, then switch to complex systems & graph theory, then machine learning/AI.
> 
> My colleague (for comparison): degree in computer science & automation, then PhD in computer science (complex systems and distributed computing), then switch to theoretical machine learning with interdisciplinary applications in neuroscience, psychiatry, blockchain technology.... so I think any career path is possible.


----------



## Squirt (Jun 2, 2017)

Antiparticle said:


> These are very political questions, so I will try to give neutral scientific answers:


Do you think technology development can be separated from politics? I’ve yet to see it.


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> Do you think technology development can be separated from politics? I’ve yet to see it.


It’s a general (broad) topic, but technology development that comes from scientific discoveries is separated from politics (because science and politics are separate; anyone who didn’t believe that before covid can now clearly see how “joint” decision-making was done). Research funding is also not politics, though industry funding for research & development will probably become more political if significant economic profit is involved, depending on the field and type of industry.


----------



## Ssenptni (Mar 26, 2021)

Antiparticle said:


> Research funding is also not politics


Lol - if you are working for Goog and getting paid in covidbucks, doing research is a political act.

What if you were at Goog and you built an AI, and the AI gave a result that said cov models and definitions are invalid. What would happen?

And if you built the same AI open source and got the same result and you tried to talk about it in a domain controlled by Goog, what would happen?


----------



## Squirt (Jun 2, 2017)

Antiparticle said:


> It’s a general (broad) topic, but technology development that comes from scientific discoveries is separated from politics (because science & politics are separated, who didn’t believe that before covid now can clearly see how “joint” decision making was done). Research funding is also not politics, probably industry funding for research & development will become more political if some significant economic profit is involved, depending on the field and type of industry.


What we saw during the covid pandemic is scientists realizing how political their research really was… scientists were not merely collaborating but were active participants in, and dependent on, the political atmosphere… without the political incentives, vaccine research did not get nearly the attention, motivation, or funding it needed, and potential breakthroughs languished. You think we developed vaccines just because it is fun?

Similarly, the very idea of a “public good” for AI technology is political.

Funding programs are absolutely political, too. I don’t know how you’ve convinced yourself otherwise. Show me any grant or funding campaign and I’ll show you the politics in it, if it is hard for you to notice.


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> What we saw during the covid pandemic is scientists realizing how political their research really was… scientists were not collaborating but were active participants and dependent on the political atmosphere… without the political incentives, vaccine research did not get nearly the attention, motivation or funding and potential breakthroughs languished. You think we developed vaccines just because it is fun?
> 
> Similarly, the very idea of a “public good” for AI technology is political.
> 
> Funding programs are absolutely political, too. I don’t know how you’ve convinced yourself otherwise. Show me any grant or funding campaign and I’ll show you the politics in it, if it is hard for you to notice.


Well, “politics” = “society”, so it depends on what you call political decisions. Science is not politics; it’s science. Funding could be politics, but it is still decided by scientists.

I can’t help noticing how political (even military-oriented) this thread is for people with USA flags. I think many countries consider USA politics not that great but have to accept that it exists (still not exactly a compliment), so I know many don’t share these viewpoints and values.


----------



## Antiparticle (Jan 8, 2013)

Ssenptni said:


> Lol - if you are working for Goog and getting paid in covidbucks, doing research is a political act.
> 
> What if you were at Goog and you built an AI, and the AI gave a result that said cov models and definitions are invalid. What would happen?
> 
> And if you built the same AI open source and got the same result and you tried to talk about it in a domain controlled by Goog, what would happen?


I think a political act forced into AI research, from my side, would be something like this: I want to develop AI, but I will go to work at Google Brain in Zurich, not in the USA, because I support Swiss politics. Otherwise it’s the same AI research; these people even collaborate on similar topics.

In 2021 there were 100,000 covid papers; 90% of them are forecast models, 85% of those are bad/wrong, and 99% are open source or available after emailing the authors (1%).


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> What we saw during the covid pandemic is scientists realizing how political their research really was… scientists were not collaborating but were active participants and dependent on the political atmosphere… without the political incentives, vaccine research did not get nearly the attention, motivation or funding and potential breakthroughs languished. You think we developed vaccines just because it is fun?
> 
> Similarly, the very idea of a “public good” for AI technology is political.
> 
> Funding programs are absolutely political, too. I don’t know how you’ve convinced yourself otherwise. Show me any grant or funding campaign and I’ll show you the politics in it, if it is hard for you to notice.


If politicians didn’t want to listen to the scientists, it sounds like the politicians are the problem; science is neutral.

It was clear 20 years ago, from the scientific point of view, what happens when a virus spreads, so what went wrong is actually beyond me. If I want to fix a tooth, I go to the dentist (politicians also go there); if I need new shoes, I go to the shoe store (politicians also go there); but what happens when a politician needs an epidemic model? Then they suddenly know better? No, but in my country they thought they knew better what was best for the economy (that was also wrong, but still better than in most countries).


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> You think we developed vaccines just because it is fun?
> 
> Similarly, the very idea of a “public good” for AI technology is political.


We developed them because it was an epidemic (millions of people needed them) and because they are cheap to develop ($1 per dose).

If you get a glioblastoma brain tumor in the future: the odds of getting it are 1 per 100,000 people, so not that many; it does not even qualify as a rare disease (that needs 3-5 per 10,000). You get to choose between 2-3 chemotherapy drugs (they are expensive, so I assume the government pays in the USA, I think around $1,000); if they are a good match you are lucky, otherwise your chance of dying is above 99%.

Pharmaceutical companies are not going to develop expensive new drugs for this disease. This is also why cancer research, including drug-therapy development, is very generously supported in most universities.


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> Similarly, the very idea of a “public good” for AI technology is political.


My 4th reply on the same issue 😸: all politicians can be voted out (in democracies), so whenever there is a high level of unhappiness with them or with their decisions, it is a good idea not to vote for the same people again.

Politicians in my country couldn’t apply the existing (10-20-year-old) scientific results, and in some cases didn’t even understand the law (most laws forbid spreading infectious diseases). I could never vote for the same people again.

What purpose should science have other than “social good”? Funny that we even have this term; it should just be basic logical reasoning: either it’s for short-term public good or long-term (which needs some development). The only anti-social application I can currently think of is war.


----------



## Rihanna (Nov 30, 2020)

when do we get transformers?


----------



## Antiparticle (Jan 8, 2013)

Rihanna said:


> when do we get transformers?


I know that for some people this is already “scary”: Boston Dynamics | Changing Your Idea of What Robots Can Do


----------



## Squirt (Jun 2, 2017)

Antiparticle said:


> Well “politics” = “society”, so depends on what you call political decisions. Science is not politics, it’s science. Funding could be politics, but still decided by scientists.





Antiparticle said:


> If politicians didn’t want to listen to the scientists, sounds like politicians are the problem, science is neutral.





Antiparticle said:


> What should be any other purpose for science other than “social good”? Funny we even have this term, instead it should just be our basic logical reasoning, either it’s for short-term public good or long-term (needs some development). The only anti-social application I can currently think of would be war.


This is where it would be critical for aspiring scientists to be more familiar with history and the humanities in their education - to learn how to recognize the role sciences play in society and get some perspective about it. For instance, the political situation around the world heavily influenced the development of quantum physics but is mostly glossed over and not considered by physicists, even though at the time many aspects of the theory were informed by the political questions of the day.

The expectation that scientific study is free from bias (neutral) and just pops into existence on its own merit is simply false and seems to fuel a naive idealism that, through science, we can become objective creatures - and then the delusions compound. While scientists do study “the objective universe” which is indifferent to our desires, we ourselves are not indifferent to them, clearly!

“The computer programmer is a creator of universes for which he alone is the lawgiver. No playwright, no stage director, no emperor, however powerful, has ever exercised such absolute authority to arrange a stage or field of battle and to command such unswervingly dutiful actors or troops.”
― Joseph Weizenbaum*

The thing that makes it especially stand out is AI is essentially _delegating human decision-making to algorithms_. That in itself requires an even greater integration of sciences and humanities, and care taken with the politics it elucidates. 

It is important not to dismiss these facts with the _excuse_ of “neutrality." AI has become more advanced and developed as a consequence of specific incentives and for specific purposes, under specific conditions. What are they? Answer that, and you'll quickly discover how it is far from neutral in any real sense (I have some clues, like the necessity to handle massive data sets). I cannot see some vague "social good" put into practice under these terms except as a political manipulation.

Being curious about the world, wanting to know how it works, is perhaps a purer form of scientific exploration... but the goal to manipulate what we find to some desirable end (well within the domain of politics) is the inevitable conclusion, and that reality shouldn't be brushed aside as not relevant to explore or question in science.

*Computer Power and Human Reason: From Judgment to Calculation by Joseph Weizenbaum | Goodreads 



Antiparticle said:


> We developed them because it was epidemics (millions of people need it) and because they are cheap to develop (1$ per dose).
> 
> If you get glioblastoma brain tumor in the future, your statistics to get it are 1 per 100 000 people so not that much, this is not even a rare disease (it needs 3-5 per 10 000), so you get to chose between 2-3 chemotherapy drugs (they are expensive so I assume the government pays in USA, I think around 1000$), if they are a good match you are lucky, otherwise your chances to die are above 99%.
> 
> Pharmacies are not going to develop any expensive new drugs for this disease. This is also why cancer research is very generously supported in most universities, including drug therapy development.


Vaccines are made cheap artificially. They are very resource intensive to develop, manufacture, and distribute, especially on a global scale as in the current pandemic. It then becomes a concerted effort to manage the resources to make it happen, requiring the cooperation of many organizations. Pharmaceutical companies have economic incentives to be grossly profitable, and so have huge lobbies to support their business around the world. Universities are also highly political and honestly that is why I couldn't stand "climbing the ladder" in them - who can bring in the most funding? What causes/projects will gain the most support? What credentials do you need to have research worth considering?

I don't want to take the thread too off-topic, but I don't see how these examples are not related to politics.


----------



## LeafStew (Oct 17, 2009)

Read today that 2 AI researchers from Deepmind (Google) won the Breakthrough prize:


> Demis Hassabis, DeepMind
> John Jumper, DeepMind
> 
> For developing a deep learning AI method that rapidly and accurately predicts the three-dimensional structure of proteins from their amino acid sequence.
> ...


----------



## Squirt (Jun 2, 2017)

LeafStew said:


> Read today that 2 AI researchers from Deepmind (Google) won the Breakthrough prize:


Awesome that we have millions of dollars thrown at scientists by the esteemed Google and Facebook founders and... transhumanists:


Not to be confused with these people:


I would be curious what these scientists' true opinions about these awards are... not that they could turn down that amount of cash.

Anyway, I attended a workshop about molecular visualization where we discussed AlphaFold2. It looks useful for predicting protein structure and is a starting-off point for more complex projects. John Jumper seems to be the main brains behind it, having devoted his entire career thus far to it, while Hassabis is the business side. Pharmaceutical companies stand to gain and are already putting it to work, and Hassabis has assured them he will build on it with models to better facilitate drug discovery, even creating a new company for it as a subsidiary of Alphabet: Isomorphic Labs | Home

“You can think of it as a little bit like what DeepMind does with Google,” says Hassabis. “Our research goes into hundreds of Google products; almost every Google product you touch now has some DeepMind tech in it. You can think of Isomorphic Labs as our outlet for the real world beyond Google.” DeepMind's AlphaFold changed how researchers work | MIT Technology Review

Breakthrough indeed. The way these tech conglomerates are inserting themselves into medicine (and other sectors) under this umbrella of "science for humankind" is disconcerting, to say the least. "Octopus" doesn't even define it... these are more like amoebas. 

At least the Wiley Foundation is more ideologically sane:

The 20th Annual Wiley Prize in Biomedical Sciences Awarded for Protein Structure Predictions | John Wiley & Sons, Inc.


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> Awesome that we have millions of dollars thrown at scientists by the esteemed Google and Facebook founders and... transhumanists:
> 
> 
> * *
> ...


I know that everyone can write anything anywhere, but why are you writing in this thread if you are negative towards scientific discoveries, when the thread is aimed at the complete opposite (popularization of science and scientific discoveries)?

Why not focus on some topic that positively motivates your own creativity?


----------



## Necrofantasia (Feb 26, 2014)

> Why not focus on some topic that positively motivates you for your own creativity?


My guess is because it's very hard not to notice how AI could compound the issues that have cropped up since the inception of the internet.

There's fear, in other words. Hard to focus on positives when you've seen extensively how irresponsibly managed tech warps the lives of those using it (Facebook's effects on international civil unrest, videogames' use of addiction psychology, etc etc) because those deploying it failed to understand it wouldn't operate in a vacuum...or understood and didn't care. (_Zucc_...)

Operating on an international scale means fuckups are very very hard to roll back and often transform the course of history.

I guess that gives room for the question: what safeguards are being used to avert this kind of slippery slope? Are there any? Since the law operates at molasses pace and on a precedent-heavy basis it's kinda even more concerning to go into uncharted territory.


----------



## Antiparticle (Jan 8, 2013)

Necrofantasia said:


> There's fear, in other words. Hard to focus on positives when you've seen extensively how irresponsibly managed tech warps the lives of those using it (Facebook's effects on international civil unrest, videogames' use of addiction psychology, etc etc) because those deploying it failed to understand it wouldn't operate in a vacuum...or understood and didn't care. (_Zucc_...)


Fear is imaginary and caused by social media. Unplug for a month and you will notice the effects, even from unregulated news portals, antivax influencers, conspiracy theories…. War news sounds hyped and exciting; it’s social manipulation. On social media anyone can write anything anonymously, without any consequences, whereas published scientific research goes out under your own name and reputation. (To compare motivations: would vaccine researchers really want to kill their own families, while anonymous internet trolls are revealing the real truth to random strangers online?)

There is no explanation for why millions of people would start to believe something that’s not true, other than being constantly exposed to it. (This is also part of scientific research: https://www.pnas.org/doi/10.1073/pnas.1803470115 ).

Recently I heard some scientists calling this the “post-truth” era; I have to agree.

Ethics in AI is an important part of AI research, as are AI safety, robustness, transparency, interpretability, explainability… AI is part of legal research as well, so we all have the right to be “protected from decisions based solely on automated processing” (and many other regulations). So it’s definitely not science or the pursuit of new knowledge that creates new problems, if the aim is to find new solutions.


----------



## Necrofantasia (Feb 26, 2014)

Antiparticle said:


> Fear is imaginary and caused by social media. Unplug for a month, you will notice the effects, even from news portals when unregulated, antivax influencers, conspiracy theories…. War news sound hyped and exciting, it’s social manipulation. On social media anyone can write anything anonymously, without any consequences, whereas published scientific research will be under your own name and reputation. (To compare motivation: would vaccine researchers really want to kill their own families, however anonymous internet trolls are revealing the real truth to random strangers online?)
> 
> There is no explanation why would millions of people started to believe something that’s not true, other than being constantly exposed to it. (Also part of scientific research: https://www.pnas.org/doi/10.1073/pnas.180347011)
> 
> ...


The fact my points are being conflated with antivax/antiscience reasoning makes me doubt my question was understood....

At least you got part of it, social manipulation is ubiquitous in our lives and a matter of concern when rolling out solutions that deal with information.

Next time just add "Positive Vibes Only" to your thread title.


----------



## NipNip (Apr 16, 2015)

Antiparticle said:


> What do you mean?


----------



## Squirt (Jun 2, 2017)

Antiparticle said:


> I definitely don’t understand this, so I have to agree.


Haha, well, you don’t need to. I’m also tired and stressed, and so less patient and coherent than I should be. Sorry about that.

Your point about constructive criticism is taken. 👍


----------



## mimesis (Apr 10, 2012)

Antiparticle said:


> There is no explanation why would millions of people started to believe something that’s not true, other than being constantly exposed to it. (Also part of scientific research: https://www.pnas.org/doi/10.1073/pnas.180347011)
> 
> Recently I heard some scientists calling this “post-truth” era, I have to agree.


Isn't constantly being exposed to it supposed to be a blessing, thanks to AI? To only be exposed to what we subjectively 👍 before?



> In short, the algorithm had gotten way more personal. The goal was to find the video each particular viewer wants to watch, not just the video that lots of other people have perhaps watched in the past. [ youtube ]


_Intersubjectivity_
_An intersubjective truth asserts a “fact” that a group of people agree implicitly to treat as axiomatic, and as though it were an objective truth. All moralities and collections of “common sense” are thus sets of intersubjective truths._

Your link doesn't work btw (404 page not found)


----------



## Antiparticle (Jan 8, 2013)

mimesis said:


> Isn't constantly being exposed to it supposed to be a blessing, thanks to AI? To only be exposed to what we subjectively 👍 before?
> 
> 
> 
> ...


Not sure I understand why being exposed to wrong information would be good, if it has already demonstrated bad effects on society. Covid example: if 70-80% of the population, or at least key people in important places, had known in December 2019 how to calculate what R = 3-4 means (1 infected person infecting 3 or 4 others on average), there would have been no covid. I can see how most things would be better in this scenario for everyone. Now we have the same problems as before/without covid, plus additional ones, with the “misinformation cloud” on top of everything. And war, of course. How is this better than having perfect information spread?
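
To make the arithmetic concrete (a deliberately naive back-of-the-envelope model: pure geometric growth with no immunity, depletion, or interventions; the function name is mine):

```python
# Each infected person infects R others on average, so new cases grow
# geometrically by a factor of R per generation. With R > 1 an outbreak
# explodes; with R < 1 the chains of infection die out.

def cases_in_generation(r: float, generation: int, seed_cases: int = 1) -> int:
    """New cases in a given generation under pure geometric growth."""
    return round(seed_cases * r ** generation)

print(cases_in_generation(3, 10))    # R = 3: one seed case -> 59049 new cases
print(cases_in_generation(0.9, 10))  # R < 1: the chain fizzles out -> 0
```

Ten generations is roughly two months at a generation time of a few days, which is why the difference between R = 3 and R < 1 matters so much.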

To conclude: I would choose the “global” truth.

edit: fixed the link

Overall, I think I want to “agree” with the general negative sentiment about AI (received in this thread or in general). When facing the unknown, it seems better to be negative/suspicious, because that promotes critical thinking. Note that important discoveries are often published in broad-impact journals so they are accessible/readable (to encourage further research).


----------



## Squirt (Jun 2, 2017)

Antiparticle said:


> What can be automated in a helpful way, should be automated, so that both keep critical thinking skills.
> 
> Examples: Hospitals - too many people, too much issues, too many patients per 1 doctor, if doctors cannot pay attention to your full longitudinal trajectory of symptoms, there is higher chance to misdiagnose you and make a mistake. Many medical problems don’t require IQ=200, and AI most probably will not be that smart anytime soon for real-world problems (but probably it can score that high on IQ tests).


Is AI actually the best solution for that problem, though? How does it automate in a helpful way where critical thinking skills are maintained? I'm going to quote Weizenbaum again:

"I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed. For example, banking. Superficially, it looks as if banking has been revolutionized by the computer. But only very superficially. Consider that, say 20, 25 years ago, the banks were faced with the fact that the population was growing at a very rapid rate, many more checks would be written than before, and so on. Their response was to bring in the computer. By the way, I helped design the first computer banking system in the United States, for the Bank of America 25 years ago.

"Now if it had not been for the computer, if the computer had not been invented, what would the banks have had to do? They might have had to decentralize, or they might have had to regionalize in some way. In other words, it might have been necessary to introduce a social invention, as opposed to the technical invention.

"What the coming of the computer did, "just in time," was to make it unnecessary to create social inventions, to change the system in any way. So in that sense, the computer has acted as fundamentally a conservative force, a force which kept power or even solidified power where it already existed."

Weizenbaum examines computers and society - The Tech (mit.edu)

I see a similar scenario playing out within broken medical systems - automating in this case is to cover up how mismanaged it is while continuing the status quo.

While AI can have useful applications, it is important to evaluate if it is truly the right tool for the scope of the problem. AI makes a lot of sense for enabling faster protein structure predictions from a large database, especially being a relatively tractable problem of chemistry. I don't think it makes much sense to replace doctors with AI because we've made it impossible for doctors to have adequate resources to do their jobs well in a system that hasn't been serving communities due to social and economic policies, not technical limitations. At least, not without being conscious that you are sidestepping the problem and not actually solving it.

I find Weizenbaum's views to be very realistic and well-considered. However, what he calls "social invention" would be much harder to implement than a technical one, and that should be acknowledged. Also, social invention still occurs when we must adapt to technology changes, but that adaptation is not part of the "plan" and isn't generally accounted for in the implementation of the technical solution. If you do plan for it, and it is part of the overall strategy of _improvement_, maybe it could work. Not sure exactly how that would look, though. 

Then I can see many scenarios where it could be abused, such as "more efficient" health information collection and analysis used by private insurance companies to set rates. Here is where it is also important to know whose problems you are solving and what that means for everyone else.

Continuing from the above interview about education, but which holds for AI within any other industry, imo:

"People come to MIT and to other places, people from all sorts of establishments -- the medical establishment, the legal establishment, the education establishment, and in effect they say, "You have there a very wonderful instrument which solves a lot of problems. Surely there must be problems in my establishment -- in this case, the educational establishment, for which your wonderful instrument is a solution. Please tell me for what problems your wonderful instrument is a solution."

"The questioning should start the other way -- it should perhaps start with the question of what education is supposed to accomplish in the first place. Then perhaps [one should] state some priorities -- it should accomplish this, it should do that, it should do the other thing. Then one might ask, in terms of what it's supposed to do, what are the priorities? What are the most urgent problems? And once one has identified the urgent problems, then one can perhaps say, "Here is a problem for which the computer seems to be well-suited." I think that's the way it has to begin."

To continue to AMA: Who do you find important or influential in your studies on AI? You said you like to provide links. What are your favorite general resources?


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> Is AI actually the best solution for that problem, though? How does it automate in a helpful way where critical thinking skills are maintained? I'm going to quote Weizenbaum again:
> 
> "I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed. For example, banking. Superficially, it looks as if banking has been revolutionized by the computer. But only very superficially. Consider that, say 20, 25 years ago, the banks were faced with the fact that the population was growing at a very rapid rate, many more checks would be written than before, and so on. Their response was to bring in the computer. By the way, I helped design the first computer banking system in the United States, for the Bank of America 25 years ago.
> 
> ...


In medical image analysis (radiology) it achieves excellent scores; computer vision is already a very well-developed field.

One interesting reason why: large-scale image data, i.e. the billions 😂 of random dog and cat images uploaded to the internet, help train neural networks to recognize/classify what is a dog or a cat. The same neural network can then be transferred (with transfer learning) to classify medical images and recognize different classes of brain tumors.

Transfer learning is a promising idea for any field that lacks labeled data, since expert labeling is very expensive (it is much harder for a doctor to label a “tumor” in an image than for anyone to label a “dog” or a “cat”), so networks can be trained in one domain and then transferred to other domains (not just medicine).

Something similar happens in human brains: if children already know one foreign language (e.g. French), is it easier/faster for them to learn a second (e.g. Spanish)?

Another example is classifying data that is “non-interpretable” by humans: EEG/ECG time series (brain and heart signals) are noisy and visually look similar, so humans can’t classify them by eye.
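The transfer-learning recipe described above (reuse a feature extractor trained on plentiful data; retrain only a small classifier head on the scarce target data) can be sketched in a few lines. Everything here is synthetic and illustrative: a fixed random projection stands in for a pretrained CNN backbone, and the data is a toy stand-in for medical images, not any specific library's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone, stood in for by a fixed random projection; in a
# real pipeline this would be a CNN trained on the large dog/cat-style
# dataset. Its weights are frozen: never updated during transfer.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    return np.maximum(x @ W_frozen, 0.0)  # ReLU on the frozen projection

# Tiny synthetic stand-in for the scarce labeled "medical" data.
X = np.vstack([rng.normal(-1.0, 1.0, (50, 64)),
               rng.normal(+1.0, 1.0, (50, 64))])
y = np.array([0] * 50 + [1] * 50)

# Transfer step: train ONLY a new linear head (logistic regression)
# on top of the frozen features.
F = features(X)
w, b, lr = np.zeros(16), 0.0, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid
    err = p - y
    w -= lr * F.T @ err / len(y)
    b -= lr * err.mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
print(f"head-only accuracy: {acc:.2f}")
```

Only the 17 head parameters are trained here; in practice this is why a hospital with a few hundred labeled scans can still benefit from a network trained on millions of everyday photos.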


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> To continue to AMA: Who do you find important or influential in your studies on AI? You said you like to provide links. What are your favorite general resources?


I learn best from a mentor, and I have had a few (some more important than others):

1. PhD mentor, for math and theory, and for learning to think like a real scientist (he was a “crazy physicist” type)
2. Postdoc mentor, for supporting my new ideas and me as a person (also very important in science, especially after the PhD), and for giving me the confidence to think independently
3. A colleague (a brilliant AI researcher) who made a 10-year research plan of what is important in AI; for now I feel confident just following those steps in my own research. So, even though we are a similar age, he is also a mentor.

I like to read new research papers at my own pace and by my own preferences; newly published discoveries are the main resource for me.


----------



## Squirt (Jun 2, 2017)

Antiparticle said:


> In scanning medical images (radiology) it has perfect scores, computer vision is very well developed field already.
> 
> One interesting fact why: Large-scale image data, i.e. our random dog and cat images (billions 😂) that are uploaded to internet help to train neural networks recognize/classify what is a dog or a cat. The same neural network can be transferred/trained (with transfer learning) to classify medical images and recognize different classes of brain tumors.
> 
> ...


Thanks for the specific application. You meant software used to interpret medical images? That would definitely be an improvement. What better than a computer to read a computer image? 

When I was in college, I attempted to roughly quantify the toxin levels of Aspergillus using photographs of the plates (aflatoxin causes a color change), which would be much less expensive/time consuming and less of a safety hazard than chemically testing for it. I figured a photograph might provide more precision than the naked eye because I could analyze the color using the RGB color model (if I could ensure the photographs were taken under the same conditions with the same camera/settings). It was pretty simple, but I never got to finish the study to test whether it worked.
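A rough sketch of that plate-photo idea: reduce each image to its mean (R, G, B) so a colour change becomes a number you can compare across plates. The image below is synthetic and the function name is made up for illustration; as the post notes, it assumes every photo is taken with the same camera, settings, and lighting.

```python
import numpy as np

def mean_rgb(image):
    """Average the R, G, B channels of an HxWx3 uint8 image."""
    return image.reshape(-1, 3).mean(axis=0)

# Synthetic stand-in for one photographed plate: a uniform greenish patch.
plate = np.zeros((10, 10, 3), dtype=np.uint8)
plate[..., 1] = 200          # strong green channel

r, g, b = mean_rgb(plate)
print(r, g, b)               # → 0.0 200.0 0.0
```

With consistent imaging conditions, a calibration curve from mean channel values (or their ratios) to chemically measured toxin levels would be the remaining step the study never got to test.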


----------



## Antiparticle (Jan 8, 2013)

Squirt said:


> Thanks for the specific application. You meant software used to interpret medical images? That would definitely be an improvement. What better than a computer to read a computer image?


I mean an AI “model”, e.g. a neural network. A model is an algorithm, which is different from software, but eventually it can be implemented as software.


----------



## Ssenptni (Mar 26, 2021)

Antiparticle said:


> I am familiar with most covid models (and worked on some), but I don’t consider them AI, it’s just stochastic modeling.
> 
> What is your concern regarding epidemiological models? The accuracy of the forecasts?


The models assumed NPIs work, assumed prior immunity at large = 0, etc etc. Many assumptions known at the time to be false, and are still false.

I think AI was not used to develop the models or to learn whether the models were valid, rather it was used to learn about the propagation of information in disagreement with models/official info, and to silence that. In other words AI, which is supposed to be about learning, was used to prevent learning.

In one sense forecasts are irrelevant because of the inherent invalidity of the models. But how did the forecasts perform?


----------



## mimesis (Apr 10, 2012)

Antiparticle said:


> Not sure I understand why would being exposed to wrong information be good, if it already demonstrated having bad effects on the society. Covid example: If 70-80% population, or at least key people in important places, knew in December 2019 how to calculate what R = 3-4 means (the probability that 1 person infects 3 or 4 people) there would be no covid. I can see how most things would be better in this scenario for everyone. Now we have the same problems as before/without covid, and additional ones, and the “misinformation cloud” on top of everything. And war of course. How is this better, instead of having perfect information spread?
> 
> To conclude: I would chose the “global” truth.
> 
> ...


"Blessing" was meant ironically. Personalization algorithms (machine-learning personalization) are usually believed to be a blessing, because they save our brains processing time by tailoring information to our personal taste (projecting our history).

But in the same way they can tailor disinformation. Rather than global, old-school, top-down indoctrination, the information is selected for each individual based on a personal profile/history. My neighbour may be spammed with conspiracy/antivax stuff while I get none of it; I need to go here to get up to date.

And indeed, the more often you are exposed, the more plausible it may become.

_Flooding the Zone: How Exposure to Implausible Statements Shapes Subsequent Belief Judgments_ (academic.oup.com)

_A Facebook whistleblower said it knows that its algorithms are pushing QAnon and white nationalist content to Trump fans, but denies it_ (www.businessinsider.com) — Frances Haugen said in an interview with CBS that Facebook was reluctant to do anything that would lower engagement and cost it money.

_Far-Right Misinformation Is Thriving On Facebook. A New Study Shows Just How Much_ (www.npr.org) — Research from New York University found that far-right accounts known for spreading misinformation drive engagement at higher rates than other news sources.


Btw as with every technological revolution,

_"Bronze would transform human societies by producing larger surpluses of agriculture and allowing for the creation of superior weapons."_


----------



## Antiparticle (Jan 8, 2013)

Ssenptni said:


> The models assumed NPIs work, assumed prior immunity at large = 0, etc etc. Many assumptions known at the time to be false, and are still false.
> 
> I think AI was not used to develop the models or to learn whether the models were valid, rather it was used to learn about the propagation of information in disagreement with models/official info, and to silence that. In other words AI, which is supposed to be about learning, was used to prevent learning.
> 
> In one sense forecasts are irrelevant because of the inherent invalidity of the models. But how did the forecasts perform?


It’s not true (what you wrote). Basically there are 3 classes of models: early in an epidemic, exponential or logistic models; later, compartmental models. The later ones are very sensitive to small changes in parameters but otherwise very realistic. Sensitivity means that if parameters such as the transmissibility or lethality of the virus are not perfectly known, the bulk values (such as total numbers of deaths or cases) can vary greatly over different time horizons. This is why these models are only good for short-term forecasts (1-2 weeks). However, even very unrefined models are more than enough whenever a new virus emerges, to give us a general estimate of what happens (and when). These are useful for policymaking (for politicians).
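A minimal compartmental (SIR) model makes the sensitivity point concrete: a modest change in the reproduction number R0 shifts the bulk totals substantially. The parameter values below are illustrative only, not taken from any actual covid model.

```python
# Minimal SIR compartmental model (simple 1-day Euler steps), sketching
# how sensitive bulk totals are to transmissibility. Illustrative values:
# gamma = 0.1 means a 10-day average infectious period.
def sir_final_size(r0, gamma=0.1, days=730, n=1_000_000, i0=10):
    beta = r0 * gamma            # transmission rate implied by R0
    s, i, r = n - i0, i0, 0.0
    for _ in range(days):
        new_inf = beta * s * i / n   # S -> I
        new_rec = gamma * i          # I -> R
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r                     # total ever infected after `days`

for r0 in (1.5, 2.0, 2.5):
    print(f"R0={r0}: ~{sir_final_size(r0):,.0f} total infections")
```

Moving R0 from 1.5 to 2.5, well within early-epidemic measurement uncertainty, changes the total attack rate by tens of percent of the population, which is why compartmental models are trusted for short horizons but not for long-run totals.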


----------



## Ssenptni (Mar 26, 2021)

Antiparticle said:


> It’s not true (what you wrote). Basically there are 3 classes of models: early in epidemic exponential or logistic, later compartmental models are used. These later ones are very sensitive to the small changes of parameters, but otherwise very realistic. Sensitivity means if parameters such as transmissibility or lethality of the virus are not perfectly known, the bulk values (such as total number of deaths or cases) can vary greatly over different time horizons. This is why these models are good for short-term forecasts (1-2 weeks). However very unrefined models are more than enough whenever new virus emerges, this is to give us a general estimate what happens (and when). These are useful for policy making (for politicians).


I was talking about early models, and yes, it is true: exponential spread, no prior immunity, effective NPIs - these were all assumptions. The models associate death counts (and the methods for counting deaths are also invalid) with "social distancing" behavior, based on phone location data.
The early models were invalid, so the later models are irrelevant because there is no reason for them to exist.
Using these models for policymaking is called "malfeasance."

But I agree, I would not call the models AI.


----------



## Antiparticle (Jan 8, 2013)

Ssenptni said:


> I was talking about early models and yes it is true that exponential spread, no prior immunity, NPIs are effective - these were all assumptions. The models associate death counts (which, the methods for counting deaths are also invalid) with "social distancing" behavior, based on phone location data.
> The early models were invalid, so later models are irrelevant because there is no reason for them to exist.
> Using these models for policymaking is called "malfeasance."
> 
> But I agree, I would not call the models AI.


I don’t understand this so not sure how to comment.


----------



## Ssenptni (Mar 26, 2021)

Antiparticle said:


> I don’t understand this so not sure how to comment.


I actually came on here wanting to talk about music AI 🤷‍♂️


----------



## Antiparticle (Jan 8, 2013)

Ssenptni said:


> I actually came on here wanting to talk about music AI 🤷‍♂️


What about music & AI?


----------



## Ssenptni (Mar 26, 2021)

Antiparticle said:


> What about music & AI?


For one thing, there is an issue called "temperament."
In one key, notes have a certain profile of overtones. This profile does not match other keys.
To accommodate being able to play in every key, every note is tuned a little bit off.
It may be possible to use AI to adjust tuning to the true temperament for the key at that moment, in real time.
Maybe that would sound good, who knows.
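The temperament gap being described can be quantified without any AI: compare equal-tempered (12-TET) frequencies with just-intonation ratios. This is standard music theory, sketched here only to show the offsets an adaptive, real-time tuner would have to correct.

```python
import math

A4 = 440.0  # reference pitch

def equal_tempered(semitones):
    # 12-TET: each semitone multiplies the frequency by 2**(1/12),
    # so every key is equally (slightly) out of tune.
    return A4 * 2 ** (semitones / 12)

# Just-intonation ratios for two intervals above A4.
intervals = {"major third": (4, 5 / 4), "perfect fifth": (7, 3 / 2)}

for name, (semis, ratio) in intervals.items():
    et, ji = equal_tempered(semis), A4 * ratio
    cents = 1200 * math.log2(et / ji)   # 12-TET deviation from just
    print(f"{name}: {et:.2f} Hz vs {ji:.2f} Hz ({cents:+.1f} cents)")
```

The equal-tempered major third comes out about 14 cents sharp of the pure 5:4 ratio, while the fifth is only about 2 cents flat; retuning those offsets per key, in real time, is exactly the adjustment being proposed.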


----------



## Antiparticle (Jan 8, 2013)

Ssenptni said:


> For one thing, there is an issue called "temperament."
> In one key, notes have a certain profile of overtones. This profile does not match other keys.
> To accommodate being able to play in every key, every note is tuned a little bit off.
> It may be possible to use AI to adjust tuning to the true temperament for the key at that moment, in real time.
> Maybe that would sound good, who knows.


Do you mean for tuning the piano?

DeepMind’s WaveNet composes music: _WaveNet: A Generative Model for Raw Audio_


----------



## Antiparticle (Jan 8, 2013)

Something for AI ethics, AI nanny and AI cold wars:

Joanna J. Bryson — Professor of Ethics and Technology, Hertie School of Governance (scholar.google.com): intelligence, behavioral ecology, systems AI, AI ethics, technology policy


----------



## Ssenptni (Mar 26, 2021)

Antiparticle said:


> Do you mean for tuning the piano?
> 
> DeepMind’s WaveNet composes music:WaveNet: A generative model for raw audio


Yes for tuning. As it is now nothing is ever truly in tune. 
To get it truly in tune, every time anything changes you would have to adjust every note, like with a tensor.


----------

