February 23

Tech #5

In tech on the Thursday before spring break, we were learning about self-driving electric cars. Their lasers and sensors are able to build a 3D model of where they are, but the one thing they really lack is ethical decision-making. You can't make a good ethical decision with if statements alone; you need real AI for smart self-driving cars. Some people think we have to change how AI thinks, and one way to do this is by having a better understanding of the brain.

February 23

Tech #4

Also, on the Friday before break we were experimenting with circuits. It was interesting how electrons move through wires faster or slower depending on the resistance of the wire. This resistance is important because without it, the electrons flow so fast that your battery can overheat or even explode (a short circuit). This was an interesting fact, and it made me relate it to how cars use much of their engine power to overcome friction, or in this case, the "resistance" of the road. We need that friction to be able to brake. There are a lot of things this principle connects to, such as the speed of the wheels, racing, air resistance, etc.
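The wire idea above can be sketched with Ohm's law (V = I × R), which says resistance limits how much current a battery can push. This is just a quick illustration with made-up numbers, not anything from class:

```python
# A quick sketch of Ohm's law (V = I * R): the resistance R limits
# how much current I a battery of voltage V can push through a wire.
def current(voltage_v, resistance_ohm):
    return voltage_v / resistance_ohm

# A 9 V battery through a 100-ohm resistor draws a safe, small current.
print(current(9, 100))   # 0.09
# Through a near-zero resistance (a short circuit), the current
# becomes huge, which is why the battery overheats.
print(current(9, 0.01))
```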

February 23

Tech #3

In computer tech on the Friday before break, we experimented with the power of brainstorming and executing strategies. Brainstorming interested me because there were no judgments about whether an idea was good or bad. You got to express yourself and pour emotion into designing, which I appreciated being able to do. Having no guidelines was great.

February 22

A brief intro to Quantum computing

Quantum computing makes me smile; it means the DNA and efficiency of computers is evolving. We won't always need huge supercomputers, because for certain problems even a small quantum computer could out-calculate them. Quantum computers will hopefully unveil the true nature of our universe and reality, traveling backward and forward in time, discovering if there is a god, the nature of the Big Bang, the best ways to do finance, and much more: questions that are in principle possible to answer, but whose calculations are beyond any human's capability. Here is a brief description of how they work.

In the classical model of a computer, the most fundamental building block, the bit, can only exist in one of two distinct states, a 0 or a 1. In a quantum computer the rules are changed. Not only can a "quantum bit", usually referred to as a "qubit", exist in the classical 0 and 1 states, it can also be in a coherent superposition of both. When a qubit is in this state it can be thought of as existing in two universes, as a 0 in one universe and as a 1 in the other. An operation on such a qubit effectively acts on both values at the same time. The significant point is that by performing a single operation on the qubit, we have performed the operation on two different values. Likewise, a two-qubit system would perform the operation on four values, and a three-qubit system on eight. Increasing the number of qubits therefore exponentially increases the "quantum parallelism" we can obtain with the system. With the correct type of algorithm, it is possible to use this parallelism to solve certain problems in a fraction of the time taken by a classical computer. This means humanity may gain access to something completely new, perhaps even testing theories that describe our universe in 11 dimensions instead of the 4 of space-time. 11D, pretty crazy. Quantum mechanics, or quantum physics, is what allows us to do this: some of the most advanced math and science there is.
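The "exponentially more values" idea above can be shown with plain state vectors. This is just a toy sketch on a normal computer (simulating qubits, not running a real quantum machine):

```python
import numpy as np

# A minimal sketch of the "qubit" idea: states are vectors of
# amplitudes, and a superposition holds both 0 and 1 at once.
zero = np.array([1.0, 0.0])          # classical 0 state
one = np.array([0.0, 1.0])           # classical 1 state
plus = (zero + one) / np.sqrt(2)     # equal superposition of 0 and 1

# An n-qubit register holds amplitudes for 2**n values at once,
# which is the "quantum parallelism" described above.
def register(n_qubits):
    state = plus
    for _ in range(n_qubits - 1):
        state = np.kron(state, plus)  # tensor product adds a qubit
    return state

for n in (1, 2, 3):
    print(n, "qubits ->", len(register(n)), "amplitudes")
# 1 qubits -> 2 amplitudes
# 2 qubits -> 4 amplitudes
# 3 qubits -> 8 amplitudes
```

Doubling the amplitudes with every added qubit is exactly the exponential growth the paragraph describes.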

 

February 1

Breakout edu

Two days ago in computer tech, we tried to open three locks on a box that had a key inside. It was an educational challenge designed to evoke creativity and communication between students. We had to look for three clues scattered around the room and work together to "break out". It was a good experience that challenged our creativity, and it was a good way to start building bonds between kids in the new quarter.

June 19

Blog post 7-Reflecting on presenting in front of parents

Presenting for the parents was a good experience for me; I felt comfortable the whole time.

 

I felt comfortable while presenting because I decided not to follow my script, which let me engage the audience more. I first asked if people knew what AI is, and a lot of people didn't, so I gave my project more of a basic structure so that everyone could understand it. If more people had known what AI was, I would have gone deeper into the technical parts without explaining things in a basic manner. I asked the audience more questions, which made them feel included and made me feel more comfortable. The structure of my presentation pleased me and made me feel natural while presenting.

 

Overall, I was very happy with how I presented, and I look forward to these types of knowledgeable presentations in the future. Here is my performance of my capstone project:

June 15

Blog post 6-working on my final project

My experience while working on my final project is hard to explain. I didn't feel that pressured at first, but when I looked at the sheet showing when it was due, I did. I wanted to memorize my presentation so well that I would 100 percent not fail, but later I developed a new strategy: have a general idea of what I was going to say on each slide. Because I personally went through the experience of research, and I am very informed about what I want to talk about, I can improvise a little bit. This strategy made me feel very confident about my success rate and helped me focus more.

 

I also felt motivated by how the project was set up, and by getting to share my knowledge with other people. I feel like this is my chance to share what I know and be me, without so many guidelines. I love talking and presenting, so being able to geek out about technology, with a lot of speaking in a presentation format, was a great experience for me. I feel very motivated, and I love how the teachers conducted this project.

 

This is all I experienced. I developed a strategy that really helped me, and I am very excited to share my knowledge with other people.

June 6

Main Inquiry Essay

 

How Has Artificial Intelligence (A.I.)  

Revolutionized Healthcare, and

How Will A.I. Continue to Improve Healthcare?

 

Simply put, artificial intelligence (A.I.) is a machine learning system that is able to adapt and learn all by itself. AI is incredible. Because AI is such an advanced platform, it has many uses. One of the ways AI is used in healthcare is image processing technology, which has improved diagnosis. Another way AI has revolutionized healthcare is through an app called Sugar IQ, which helps patients with diabetes through deep learning. But the search for ways AI can help healthcare is ongoing. One way AI is contributing to the future is by making diagnosis more efficient, which means we can diagnose more problems, diseases, and people.

 

How AI uses Image Processing Technology, to improve Healthcare

 

One way AI is revolutionizing healthcare is through image processing technology. Image processing technology is one of the things run by a system called a neural network. It has improved healthcare by analyzing imaging data that is critical to diagnosing patients. Here are several examples of how image processing technology is being used to diagnose medical symptoms:

 

  • Neural network review of X-rays: After a patient has an X-ray taken, the image is scanned and fed into neural network software, which assesses the results of the X-ray.
  • Facial recognition software: Facial recognition software is being used to diagnose harder-to-find symptoms, for example genetic disorders that show up in subtle facial features. It uses a machine learning algorithm trained on many example faces to do this.
  • Chatbots: Companies are using AI chatbots with speech recognition to identify patterns in patient symptoms. This helps form a potential diagnosis.
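As a rough sketch of what "a neural network assessing an image" means, here is a toy one-layer network scoring a tiny fake image. The weights here are random placeholders; a real medical network learns its weights from thousands of labeled images:

```python
import numpy as np

# Toy sketch: a tiny "neural network" looks at a 4-pixel image and
# outputs a score between 0 and 1 (e.g. how likely a finding is).
# The weights are made up; real networks learn them from data.
rng = np.random.default_rng(0)
weights = rng.normal(size=4)   # one layer: 4 inputs -> 1 output
bias = 0.0

def assess(image_pixels):
    score = image_pixels @ weights + bias  # weighted sum of pixels
    return 1 / (1 + np.exp(-score))        # squash into 0..1

print(assess(np.array([0.2, 0.9, 0.1, 0.7])))
```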

 

How AI, Through Sugar IQ, Has Improved Healthcare:

Sugar IQ:

Sugar IQ is an application designed to keep people's blood glucose levels stable. (The app is just for diabetic patients.) The app also helps you figure out which snacks are challenging for your glucose levels, based on a hardware device called the Guardian Connect CGM. The Guardian Connect is a tiny device that you tape to yourself, and every 5 minutes it inserts such a small needle into your body that you can't even feel it. The device combines that sensor data with information you enter about what you have eaten, and then the AI kicks in. The AI processes what you have eaten and learns how your body responds over time. Then the software recommends foods for you to eat, to keep your glucose levels in a normal range. Through deep learning, Sugar IQ is revolutionizing care for people with diabetes.
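A toy illustration of the kind of alerts-and-reminders behavior described above might look like this. Every threshold and rule here is invented for illustration; the real Sugar IQ uses deep learning, not simple if statements:

```python
# Toy sketch of a glucose reminder, loosely inspired by the Sugar IQ
# description above. All numbers and rules are made up.
LOW_MGDL = 70    # below this, blood sugar is considered low
HIGH_MGDL = 180  # above this, it is considered high

def check_reading(glucose_mgdl, activity_soon=False):
    if glucose_mgdl < LOW_MGDL:
        return "Alert: glucose is low, eat something now."
    if glucose_mgdl > HIGH_MGDL:
        return "Alert: glucose is high."
    if activity_soon:
        return "Reminder: pack a snack before your activity."
    return "All good."

print(check_reading(65))                       # low-glucose alert
print(check_reading(110, activity_soon=True))  # snack reminder
```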

 

How AI has diagnosed problems in Healthcare:

Watson is IBM's machine learning platform for AI. One of Watson's first projects was using AI to detect cancer. The first time Watson did this, it took 3 years. For the second type of cancer, it took 1 year, a big improvement. The third type, which is where Watson is right now, took 4 months. Over time, Watson adapted and got faster at detecting cancer, because it developed new ways to identify and diagnose it.

How Will AI Impact healthcare In the Future:

Diagnosis is the future of AI in healthcare, and it is developing at a very fast rate. Diagnosis is a big part of healthcare, and as more health problems appear, along with more data about them, AI is discovering how to diagnose more diseases and problems.

Conclusion:

These examples show how AI has revolutionized healthcare, and the look at the future shows how we can contribute to a brighter one. AI has made a huge impact on the world in general, and on healthcare in particular. Who knows, maybe one day your life will change because of AI.

May 31

Blog Post #4-Capstone Interview

I really loved my interview with Pooja Kumar. Pooja is an investor for clients in healthcare, and she knows a lot about the industry of AI in healthcare. She was very helpful in focusing where I should drive my research, giving more detailed examples of how AI is improving healthcare, how that can improve further, and how AI in healthcare can work together with humans. Here are some examples of what Pooja thought about how AI is revolutionizing healthcare, and where and how we should adapt to improve the technology for the future:

 

  • AI is starting to revolutionize healthcare; it hasn't revolutionized it yet. AI will most likely develop steadily over the next couple of decades. For context, she works with doctors and healthcare investments, some involving AI, so statistics is the field she is an expert in.
    • Examples of ways that AI will advance and revolutionize healthcare in the future:
      • Hospital systems, doctor advancements and analysis, and institutions of AI research are where AI in healthcare is statistically predicted to evolve.
      • Can technology replace the less clinical tasks that people do?
      • Technicians are wondering if AI can read radiology images, then transfer and analyze them to create an algorithm or a suggestion using Core ML and deep learning platforms.
  • Generally in 10 or 20 years, AI will be at a level that is incomparable to humans
  • AI hasn’t had a major impact on patients, it is more at the level of watching over things that are in place
  • Natural language processing is good for pulling information from things like clinical notes, for AI finance in healthcare. Generally, finance, numbers, and natural language processing are the main and most advanced aspects of AI in healthcare as of now.
  • Diagnostics of AI:
    • AI in diagnostics is going to speed up in the next few years, because more and more data is becoming a major input into AI platforms. They can learn, we can learn, and we can look to platforms like language processing and medical analytics that will allow us to teach these systems to perform tasks that humans do. X-rays are a good example of diagnostics. Here was the first historical breakthrough: 20 to 30 years ago, X-rays were on film, with bad storage, and you had to get them processed and printed out to view them. The image quality was also very poor, which gave the person reading them a negative start to their diagnostics. Comparing that to our time, we have advanced technologies that perform these amazing tasks, and AI will catch up to that level and bring together all of the third-party hardware and software we currently use to gather information, which today sit on separate platforms. If these components were part of one main AI platform, we could have a great way of diagnosing problems with AI in healthcare in the future.
    • Neural networks drive the diagnosing, and in the future they will improve and be able to process more data in memory (RAM, Random Access Memory) as our technologies improve.
  • How will AI have an impact on patients in the future?
  • A prediction associated with Sun Microsystems says AI is going to replace most doctors, and that this is the way the future will continue, but Pooja doesn't think that will happen. She thinks machines should be a companion to humans in the future; we should collaborate our minds and not treat it like slavery. AI collaborating with humans should be beneficial for both.

 

I found all of these examples very interesting and informative, and they helped me better focus my research. Here are some ideas I got from Pooja about where to take my research next:

  • Hardware technology powering Neural Networks
  • How we can make Hardware smaller and more effective
  • Pros and cons of different AI platforms
  • Natural Language processing Neural Networks and how they contribute to AI in Healthcare

 

Overall, this interview made me realize the importance of us working with these technologies, because of the massive amounts of data these machines can use to discover out-of-the-ordinary things that we can't.

May 30

Blog post 3-Site Visit (Capstone)

Generally speaking, I really loved my capstone site visit. It was a great experience, and it inspired me to be a part of AI development in the future. I met two people, Matt Calgary and Kathy McGrody, both very inspirational, who showed me how AI can contribute and make healthcare data more accurate and better. Humans have been working with this technology for years, and having a kid like me visit IBM and see Watson and AI software, well, that is pretty big.

I had a great time there. Unfortunately, there is no photo or video content, because I was not allowed to record any. I will break the next two paragraphs into each person I met and what I learned from each of them.

First, I met Kathy McGrody. I learned about the basic architecture of the building, and how the place I was visiting is so special to IBM and its success. What she said is that the architect who designed the building was named Wallace Eckert. The building was modern for 1964, when it was built, and the long glass panels represent its true beauty.

Here are some pics:

Anyway, that's not the point. The main thing I learned from Kathy is that AI's contribution to healthcare is really big: the insights and data that Watson is fed are so accurate that Watson comes up with great solutions, which have come in handy many times and still continue to. Here are the full notes that I have: Link

 

What I learned from Matt Calgary is about the development and actual software of the new SugarIQ app for iOS devices. It is not yet on the App Store as of 5/29/18, but it will come out in the next month or two. SugarIQ has Watson software built into it, and it is for people with diabetes. There is a wireless device called Guardian Connect, a tiny device that injects a needle into your skin every 5 minutes. Don't worry, you don't even feel it. The app takes the data it collects about your blood sugar, makes inferences, sees what you eat, and asks questions to get to know you. Using deep learning platforms of AI, it gets to know you better and makes better recommendations. For example, if the app recognizes that you play soccer every Saturday afternoon, it will remind you to pack a snack with a lot of calories, such as a granola bar, so your blood sugar doesn't go too low. Also, Sugar IQ will notify you on your iOS-compatible device if your blood sugar is too high or too low. I found this pretty cool. Here are a couple of videos on Sugar IQ:

Overall, I really enjoyed the site visit. It was fun, I got to see some really cool parts of AI in healthcare, I learned a lot, and most of all I was inspired to be part of AI in healthcare in the future, and to make the world better where it needs to be improved.