Virtual Reality and its Impact to the Future of Businesses


For today’s edition of #futurefridays let’s talk about Virtual Reality, and how it will change the Future of how Businesses operate today.

Have you ever been in a situation where you were trying to choose a resort to go to for your vacation, and you made your decision based on the pictures you saw?

What if there were a way to “try before you buy” by virtually immersing you in the resort, so you could experience what it’s like being there first-hand? Would that make your decision-making faster?

That’s what Virtual Reality can do for you. And it has many more use cases other than that.

Join me as we explore what Virtual Reality is and how it can be helpful in changing the way we do things within and around your business.

Let’s start with a video that defines what Immersive Technology like Virtual Reality is, and some examples of how it applies in the world today.

Video © TEDx Talks

I hope you learned a lot from this video.

Now let’s talk about some use cases where Virtual Reality can apply in businesses today.

Travel Industry

This is pretty much the example I gave you at the beginning of this article.

It’s how you can virtually be immersed into the location you want to visit without actually going there. That way, you’ll get a glimpse of what it’s like, which will help you make a better decision.

Gaming and Entertainment Industry

Well, this is pretty obvious, as this is where VR has been adopted the most. And people will continue to take things to the next level in this space.

Aviation and Drones

Particularly unmanned flight. The Air Force does this all the time, where they fly unmanned aircraft and drones remotely.

Even drone companies like DJI have come up with a way for you to pilot their drones while wearing a VR headset, giving you a unique experience as if you were flying like a bird in the sky.

Improving Safety for Hazardous Jobs

There are several use cases for this. For example, imagine being a crane operator at a shipping dock. How many storeys do you have to climb up and down just to get to and from your actual “work desk,” and what if you had to do that several times a day?

What if you could instead control the crane from the ground, safe inside an office cubicle, through Virtual Reality? How much safer and easier do you think your job would be?

Training

The way by which you train employees can be taken to the next level with the help of Virtual Reality, especially if it involves having to travel to certain locations just to learn a skill.

For example, say you need to train employees at a call center in the Philippines on what it’s like being inside the main store of your business in London, so that when they talk to customers about it, they can relate as if they have already been there. It would make better sense to let them experience it in Virtual Reality than to pay for travel and accommodation just so they can experience it in real life.

Vehicle and Property Sales

For anything short of smelling the leather, you can probably use Virtual Reality.

For example, if you wanted to buy a new car but weren’t sure which of the Alcantara leather color options would look best, why not experience them via Virtual Reality?

Looking to buy a house, but can’t make a decision without seeing what it would look like fully furnished with all the beds and furniture? Well, what if you could see it in Virtual Reality?

Ending Note

The use cases really span far and wide. I have given you just some of the examples.

Nevertheless, I hope you found this to be of value, and that it gave you a sense of what Virtual Reality is and where it can be utilized in real-world scenarios.

If you have any questions, reply with a comment. I’d love to hear from you.

And if you think this is helpful and you’d want to get updates on the next article, subscribe for updates, and get a free copy of my book – The Business Optimization Blueprint.

Artificial Intelligence Basics: What is Machine Learning?


It’s #futurefridays and today we’ll be talking about the basics of Artificial Intelligence.

Previously, I talked about Natural Language Processing, and before that, Computer Vision.

But today we’ll tackle another field of Artificial Intelligence called Machine Learning.

Wikipedia defines Machine Learning as:

Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as “training data”, in order to make predictions or decisions without being explicitly programmed to perform the task. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm for effectively performing the task.

Let me show you some examples of how Machine Learning is being used in real-world scenarios.

But first, I would recommend you watch this video from Accenture that gives a really simple explanation of what Machine Learning is, in just 1 minute.

AI 101: What is Machine Learning?

Video © Accenture

Now that you have a basic understanding of what Machine Learning is, allow me to give some examples of it in action today.

Waze

Have you ever wondered how Waze is able to tell if traffic is heavy heading towards your destination, and how it is able to forecast arrival times and recommend routes?

Well, that’s Machine Learning in action. It learns patterns based on how much time users spend on each road on the map at specific times of the day, week, month, and so on.
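If you’re curious what that kind of learning looks like in code, here’s a tiny sketch using scikit-learn (a popular Python library). This is not how Waze actually works; the features (hour of day, day of week, route length) and the trip data are made up purely to illustrate learning travel times from past trips.

```python
# Toy sketch of learning travel times from past trips (not Waze's actual model).
# Assumes scikit-learn is installed; the features and data below are made up.
from sklearn.ensemble import RandomForestRegressor

# Each row: [hour_of_day, day_of_week, route_length_km]
past_trips = [
    [8, 1, 5.0], [8, 2, 5.0], [14, 1, 5.0], [22, 5, 5.0],
    [8, 1, 12.0], [14, 3, 12.0], [18, 4, 12.0], [23, 6, 12.0],
]
minutes_taken = [35, 33, 18, 12, 70, 40, 75, 25]  # observed trip durations

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(past_trips, minutes_taken)

# Forecast a Monday 8 AM trip over the 12 km route
print(model.predict([[8, 1, 12.0]]))
```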

Chat Bots and Virtual Assistants

I’ve talked about this in another post on Natural Language Processing, as this is actually a combination of Machine Learning and NLP.

Check out the detailed article here: Chat Bots Making Phone Calls on Your Behalf using NLP AI (Use Case).

Email Spam Filtering

Did you know that your email software uses Machine Learning to determine which messages are Spam and which are not?

And when you find something in the Spam folder that isn’t supposed to be there and flag it as “Not Spam,” you’re essentially teaching the Machine Learning algorithm what to do next time.

Again this technology is a combination of Machine Learning and Natural Language Processing working in tandem with each other.
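If you’re curious what that looks like in code, here’s a toy sketch using scikit-learn. It is nowhere near what Gmail or Outlook actually run; the messages, labels, and the “re-train after a correction” step are made-up examples just to show the idea of a model learning from flagged messages.

```python
# Minimal sketch of a spam filter that "learns" from user corrections.
# Assumes scikit-learn; the example messages and labels are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "cheap loans click here",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = ["spam", "spam", "not_spam", "not_spam"]

vectorizer = CountVectorizer()
classifier = MultinomialNB()
classifier.fit(vectorizer.fit_transform(messages), labels)

# A new message arrives and gets classified
print(classifier.predict(vectorizer.transform(["free prize meeting"])))

# Flagging a wrongly filtered message as "Not Spam" simply adds it to the
# training data with the corrected label, and the model is fit again.
messages.append("free prize meeting")
labels.append("not_spam")
classifier.fit(vectorizer.fit_transform(messages), labels)
```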

Face Recognition like in Social Media

Have you ever wondered how Facebook is able to auto-detect who the people are in a photo that you upload even if you haven’t tagged them yet?

Here’s an example of how Facebook uses this technology.

This is actually Machine Learning working in tandem with Computer Vision.

This can also be used to identify emotions in faces, like Anger, Sadness, Happiness, and so forth.

Object and Character Recognition using Computer Vision

This is pretty similar to face recognition but can also be utilized for other objects, like telling apart humans from animals, or telling apart cats from dogs, or squares from circles.

This can also be used to identify characters like numbers and letters. And when combined with Natural Language Processing, it can be used to identify context and meaning.

Here’s an example of a Lego Brick Sorter using Computer Vision and Machine Learning.

Forecasting and Prediction Models like Product Recommendations

Have you ever wondered how social media sites are able to post ads that are relevant to stuff that you have been researching?

That’s machine learning in action, where the algorithm is able to recognize your buying intent based on previous purchases and recent searches.

Machine Learning can also be used to identify patterns that can be used for forecasting, such as when NASCAR race cars should pit for fuel and tires, and when crops will grow in certain areas, given the weather and other factors.

Here’s an example of using AI to predict Heart Attacks before they happen.

Ending Note

I hope you found this to be of value, and that it gave you a sense of what Machine Learning is and where it is being utilized in real-world scenarios.

If you have any questions, reply with a comment. I’d love to hear from you.

And if you think this is helpful and you’d want to get updates on the next article, subscribe for updates, and get a free copy of my book – The Business Optimization Blueprint.

What will happen to RPA (Robotic Process Automation) in 2020?


For #futurefridays let’s talk about RPA (Robotic Process Automation) and what will happen to it in 2020.

With all the rage about AI (Artificial Intelligence) and the Future of Work, people have been saying that RPA seems to have taken a back seat from all the action.

For the uninitiated, I’ll talk about the difference between RPA and AI.

So, back to what will happen to RPA. In the recent past, people were so afraid of RPA, fearing they would lose their jobs because of it.

That was only for RPA. What more for AI? I guess people are, and were, just afraid of what they don’t know.

But now that people have become accustomed to having a “Digital Workforce” or software robots working alongside them, people’s thoughts have changed from fearing for job security to scrutinizing where RPA falls short!

In my opinion, that’s a good thing, because we can now start innovating!

Here are some of the things I’ve heard people say about RPA lately:

  • It’s becoming a band aid solution
  • It can only do so much
  • With AI coming in, let’s just set our sights on that

With that, I’d like to talk about each of those bullet points in detail.

I will share my thoughts based on my own experience, as well as where I think RPA will be going to next.

RPA is becoming a Band Aid Solution

Based on my experience, my short answer is Yes, and No. Let me explain.

RPA is considered a band aid solution because ideally the long term solution is for you to fix the systems and applications that RPA interacts with.

That is true IF the development of those tools and applications is within your control, meaning you have access to the source code, you have tech developers available, and you can customize it to suit your needs.

Usually, if it’s a third-party application that isn’t open source, there’s Intellectual Property involved, so the vendor will not give you access to the source code to make changes to it.

In such scenarios, they can provide an API for you to connect other systems and applications to their proprietary software.
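To make that concrete, here’s a hypothetical sketch in Python of pulling data through a vendor’s API instead of clicking through its screens. The URL, token, and field names are all invented for illustration; a real vendor’s API will look different.

```python
# Hypothetical sketch: reading data from a vendor's API instead of scraping
# its user interface. The URL, token, and field names are made up.
import requests

response = requests.get(
    "https://api.example-vendor.com/v1/invoices",
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    params={"status": "unpaid"},
    timeout=30,
)
for invoice in response.json().get("invoices", []):
    print(invoice["id"], invoice["amount"])
```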

If you can’t customize the source code, and there is no API available, you have no choice but to go through the User Interface, which is how human users do it.

And if everything you do on the User Interface is rules-based, with no human judgment required, that’s where RPA will shine, especially if the process involves touching more than one system or application for you to complete each task.

If you really want to do away with having BOTs, you might want to look at replacing your existing systems and applications with something that will suit your needs (and costs) better than having several applications and BOTs running at the same time.

RPA can only do so much

Ah, this is where I would have to agree, to a certain extent.

RPA, in its simplest form, is like an independent human user, meaning it has its own computer (a server) and can work on its own. So, to be efficient, BOTs have typically been designed to operate in a “batch processing” fashion, which means they cannot cater to the individual needs of every other human in your workforce.

So in 2020, it’s about time for people to embrace the innovations being done in RPA. And it’s not just AI.

Enter “Farm BOTs” and “Side BOTs.”

These things aren’t necessarily new innovations, but I don’t think companies have been taking advantage of them as much.

Farm BOTs: You know when there are seasonalities in your business, like volumes go up during Christmas season, or during summer, and you tend to hire temporary employees to help you manage the volume?

That’s the same concept behind Farm BOTs, but instead of for humans, it’s for your digital workforce.

That way, you save on costs, as you don’t need to have idle servers sitting around when volumes are low.

Side BOTs: You know how Iron Man talks to Jarvis and asks him to do certain computations to help decide what action to take next?

That’s how Side BOTs operate, only they aren’t AI. They are still Robotic Process Automations that work on simple rules-based tasks and calculations, but instead of sitting on a server, where you have to wait for the “batch processing,” they sit on your desktop, and you can call on them in real time.

That way, the BOT can do rules-based tasks while you do some thinking and/or work on tasks that require human judgment.
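As a made-up illustration, here’s the kind of purely rules-based logic a Side BOT might run on demand from your desktop. The thresholds and field names are invented; the point is simply that every decision follows a fixed rule, and anything outside the rules gets handed back to a human.

```python
# Toy illustration of a rules-based "Side BOT" task: purely deterministic
# rules, no human judgment, callable on demand from the desktop.
# The thresholds and field names are made up for this example.
def recommend_refund(order_total: float, days_since_delivery: int) -> str:
    if days_since_delivery > 30:
        return "reject - outside the 30-day window"
    if order_total <= 50:
        return "auto-approve full refund"
    return "escalate to a human for judgment"

print(recommend_refund(order_total=35.00, days_since_delivery=10))
print(recommend_refund(order_total=480.00, days_since_delivery=5))
```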

With AI coming in, let’s just set our sights on that

If you’ve read this far, I guess you pretty much have an idea how RPA works, and how it will be improved further.

If you want my personal thoughts on this, I believe that RPA will not die. If anything, RPA and AI will actually complement each other, where RPA will be foundational in automating routine rules-based tasks, while AI will be used to manage some of the more complex work that is still being done by humans today, or even help humans achieve greater things that haven’t been done before.

Ending note

This post has been inspired by this article from Forbes.

If you want to read other articles I’ve shared about Automation, RPA, and AI, click here.

If you want to learn more about how to find automation opportunities, download a copy of my book for FREE (for a limited time only) – The Business Optimization Blueprint, and learn how you can improve your business process and take it to the next level.

Does the Future of Work need Lean Six Sigma?


For #futurefridays I was asked a question: “Does the Future of Work still need Lean Six Sigma?”

Well, based on my own personal experience, it’s a big YES.

You see, Lean Six Sigma is simply a structured problem-solving approach.

So if you have a problem that requires root-cause-analysis before you find the appropriate solution, then Lean Six Sigma’s DMAIC approach is what you need.

You can’t just implement some sort of software automation or that snazzy new technological tool just for the sake of it.

To put it bluntly, Automations and such Technologies are part of the Solutions. But to uncover the root causes of the problems that require these solutions, you need Lean Six Sigma.

So if anything, Lean Six Sigma is actually becoming a foundational skill that continuous process improvement and innovation experts of today must master.

Let me give you an example.

Jay Arthur, author of 2 books on Lean Six Sigma, seems to agree. Based on this article that he’s written, he says that he has observed companies taking a more Agile approach to problem solving, and Lean Six Sigma is part of it all.

In the article, he shares the phases based on his observations:

  • Prework (to me, this looks like the Define and Measure phases)
  • 1 or 2 day training (this is where they undergo Lean Six Sigma Yellow Belt Training)
  • Post-work (this looks like where the Analyze, Improve, and Control phases happen)
  • Side benefits (apart from quantifying the benefits of each project, they are also able to identify participants who can potentially become the next Green or Black Belt candidates).

In case you haven’t noticed, Hackathons that are meant to solve problems loosely follow this approach as well.

But having the various Lean Six Sigma tools in your arsenal will ensure you are solving for the right root causes, and not implementing complex technological innovations just for the technical difficulty points and impressiveness factors.

So there you have it. I hope you learned something from this.

If you want to learn more, download a copy of my book for FREE (for a limited time only) – The Business Optimization Blueprint, and learn how you can improve your business process and take it to the next level.

DEMYSTIFIED: What’s the difference between Classification and Regression?


For today’s #futurefridays I’m going to answer a question that confuses a lot of people trying to learn Data Science and Machine Learning.

The question is “What’s the difference between Classification and Regression?”

Let me give a shot at this with a simple explanation and example.

Think about the output that you want to achieve.

If you have a dataset, and the output you want to get are labels or categories, then it’s classification.

However if the output you want to get is a numerical value based on computations done on the dataset, then it’s regression.

Let me give you a specific example.

Facebook uses both classification and regression in their algorithms.

The difference is where they are used.

Classification

When you post on Facebook and upload an image, and it has faces of people who are Facebook users, the algorithm is able to determine who these people are, and tag their names accordingly. What’s happening there is called Classification.

Regression

Now when people like, comment on, and share the post, the algorithm uses that to determine how “viral” the post is, and whether it should be shown in other people’s feeds or not. This is usually based on count data and a corresponding weight for each behavior or action. What’s happening there is called Regression.
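Here’s a side-by-side toy sketch of the two using scikit-learn. The post “features” (reactions, comments, shares) and the numbers are invented, and this is obviously not Facebook’s actual algorithm; it just shows that classification outputs a label while regression outputs a number.

```python
# Side-by-side sketch of classification vs regression with scikit-learn.
# The post features (reactions, comments, shares) and values are made up.
from sklearn.linear_model import LinearRegression, LogisticRegression

posts = [[120, 15, 4], [3, 0, 0], [560, 90, 210], [45, 5, 1]]  # reactions, comments, shares

# Classification: the output is a label or category
is_viral = ["viral", "not_viral", "viral", "not_viral"]
clf = LogisticRegression().fit(posts, is_viral)
print(clf.predict([[300, 40, 60]]))   # -> a label, e.g. 'viral'

# Regression: the output is a numerical value
engagement_score = [139.0, 3.0, 860.0, 51.0]
reg = LinearRegression().fit(posts, engagement_score)
print(reg.predict([[300, 40, 60]]))   # -> a number
```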

Ending note

Of course, this is an oversimplification of Classification and Regression. There’s more to it than that, such as contextual analytics, quality of content, determining whether it’s a human or a bot, and so on.

But my objective here was to demystify things and help shed some light on the topic.

I hope you learned something new today.

If you want to learn how to become a Data Scientist, some of the best learning tracks I’ve seen are the Data Science learning paths from DataCamp.

You can start learning today even for free. Check it out! Click the link below.

Data Science at DataCamp

Artificial Intelligence Basics: What is Natural Language Processing?

and how is it used in the real world


It’s #futurefridays and today we’ll be talking about the basics of Artificial Intelligence.

Previously, I talked about Computer Vision. But today we’ll tackle another field of Artificial Intelligence called Natural Language Processing.

Wikipedia defines Natural Language Processing as:

Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data.

Let me show you some examples of how Natural Language Processing is being used in real-world scenarios.

But first, I would recommend you watch this video from Accenture that gives a really simple explanation of what Natural Language Processing is, in just 1 minute.

AI 101: What is Natural Language Processing?

Video © Accenture

Now that you have a basic understanding of what Natural Language Processing or NLP is (not to be confused with Neuro-Linguistic Programming), allow me to give some examples of NLP in action today.

Predictive Text

Some of the most basic forms of NLP would have to be predictive text, including auto-correct and auto-complete.

Nowadays you can even teach predictive text new words, and based on your usage patterns, they are added into the algorithm using machine learning.
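Here’s a toy next-word predictor in Python to show the basic idea: count which word most often follows the one you just typed, based on your own typing history. Real predictive text is far more sophisticated, but the “learn from usage” principle is the same.

```python
# Toy next-word predictor: it "learns" from your own typing history by
# counting which word most often follows the one you just typed.
from collections import Counter, defaultdict

typing_history = "see you later see you soon see you later today"

bigrams = defaultdict(Counter)
words = typing_history.split()
for current_word, next_word in zip(words, words[1:]):
    bigrams[current_word][next_word] += 1

def suggest(word: str) -> str:
    candidates = bigrams.get(word)
    return candidates.most_common(1)[0][0] if candidates else ""

print(suggest("you"))   # -> 'later' (seen most often after 'you')
```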

Speech to Text

Isn’t it wonderful to have the technology nowadays where you can dictate stuff to your mobile phone, and it will type everything down for you?

What’s brilliant is that it works just as well, even if you have an accent!

Translation

Natural Language Processing has paved the way towards bridging language barriers.

A family friend recently took a trip to Japan. And they did not know how to speak a word of Japanese!

How did they get to enjoy the trip in a foreign country where they didn’t understand the language? It’s all thanks to Natural Language Processing.

I’ve posted another article that talks about this in detail. I recommend you give it a read – Google Translate: Travel without Fear of Language Barriers (and How to Build your own Translator!)

OCR with Automatic Indexing and Classification

This works well with businesses that deal with documents that contain unstructured or free-format data.

You use Optical Character Recognition to convert the document into machine-readable format, then use Natural Language Processing to understand the context, and then extract the necessary information and/or categorize the documents accordingly.

I’ve also posted another article that talks about this in detail. Check it out – How can OCR (Optical Character Recognition) become AI (Artificial Intelligence)? Here’s how.

Sentiment Analysis

As with the example above, you can use Natural Language Processing to understand the context behind what customers are saying.

This can be used to understand customer sentiments in stuff like survey responses, feedback or complaint channels, social media, voice calls, and so forth.
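As a very rough illustration, here’s a toy sentiment scorer in Python. Real NLP models learn sentiment from large amounts of labeled data instead of relying on a hand-written word list; the words below are made up just to show the idea of scoring text as positive or negative.

```python
# Toy lexicon-based sentiment scorer for survey responses or comments.
# Real NLP models learn these associations from data; this word list is made up.
POSITIVE = {"great", "love", "fast", "helpful", "excellent"}
NEGATIVE = {"slow", "broken", "terrible", "rude", "disappointed"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("The support team was helpful and the delivery was fast"))
print(sentiment("The app is slow and the checkout is broken"))
```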

Chat BOTs and Virtual Assistants

Chat BOTs and Virtual Assistants like Siri, Alexa, and Google Assistant are probably some of the most complex use cases of Natural Language Processing.

This is probably a whole topic in itself. If you’d want to find out more, let me know by replying with a comment.

Ending Note

I hope you found this to be of value, and that it gave you a sense of what Natural Language Processing is and where it is being utilized in real-world scenarios.

If you have any questions, reply with a comment. I’d love to hear from you.

And if you think this is helpful and you’d want to get updates on the next article, subscribe for updates, and get a free copy of my book – The Business Optimization Blueprint.

Turn Your Car into a 600hp Electric Vehicle in 2020

If you're iffy about how the Cybertruck looks, then this is for you.


For today’s #futurefridays, the car enthusiast in me has its juices flowing.

With all the hype about the new Cybertruck, people either love it or hate it. And the hate is mostly about the design.

So what if you could pick any car you want, based on the design you like, and then just make it go electric? Wouldn’t that be awesome?

That’s exactly what the wizards at Stard Advanced Research and Development in Austria have done.

And it’s going to go full swing in 2020 racing at the World Rallycross Championship.

Imagine having the power to go 0-60 in 1.8 seconds. That’s faster than an F1 car!

The kit that they’ve developed has an all-wheel-drive drivetrain powered by 3 electric motors – 1 in front and 2 at the back – giving the car a total of 600hp and a staggering 1100 N-m of torque, instantly available within 1 millisecond.

Even private teams can build their own electric rallycross cars based on the kit technology and technical systems provided by Stard, which allow a full and practical integration into these cars.

Check out the video. If you’re a true car guy, it’s going to be one of the best 4 minutes and 17 seconds of your life. You’ll thank me later.

Video © Lovecars

What I love about what Stard has done is that they manufactured it as a kit, which means you can work with them on integrations with just about any car imaginable.

I bet that if this becomes highly successful, and I’m pretty sure it will, it will become an option for people who modify cars.

Ever thought about putting an LS V8 into your Mazda RX-7 track/drift car or Nissan Patrol offroader? Why not go electric!

Thinking about getting a Cybertruck but the look hasn’t grown on you? Turn your Toyota into an Electric Vehicle!

Driving your Supra and this Ferrari guy mocks you? Send all 1100N-m of torque into the pavement and leave him in your wake. You’ll just have to get used to not having those turbo blow-off valve sounds. LOL!

So tell me, what’s your favorite car, and if you had the chance, would you make it go electric?

Artificial Intelligence Basics: What is Computer Vision?

and where is it being used in the real world


It’s #futurefridays and today we’ll be talking about the basics of Artificial Intelligence.

If this catches on, I might even turn this into a series!

For this first installment, I’ll be talking about Computer Vision.

Wikipedia defines Computer Vision as:

Computer vision is an interdisciplinary scientific field that deals with how computers can be made to gain high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do.

Let me show you some examples of how Computer Vision is being used in real-world scenarios.

But first, I would recommend you watch this video from Accenture that gives a really simple explanation of what Computer Vision is, in just 1 minute.

AI 101: What is Computer Vision?

Video © Accenture

Now that you have a basic understanding of what Computer Vision is, allow me to give some examples of Computer Vision in action today.

OCR (Optical Character Recognition)

This technology isn’t new.

OCR is simply the ability to have a machine convert text from a scanned document or image into machine readable format.

Once it’s in a format that a computer can recognize, the possibilities are vast.

For example, the Google Translate app allows you to walk around in a foreign country and understand the signs, even when it’s typed in a language you don’t understand.

Simply open the Google Translate app, point your camera at the sign, and it will translate the sign into a language you can understand.

I talked about OCR in detail in a previous article – How can OCR (Optical Character Recognition) become AI (Artificial Intelligence)? Here’s how.

I suggest you give that a read.
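If you want to try basic OCR yourself, here’s a minimal sketch using the open-source Tesseract engine through the pytesseract Python package. It assumes Tesseract, pytesseract, and Pillow are installed, and that sign.jpg is an image you supply; this is not the engine Google Translate uses.

```python
# Minimal OCR sketch using the open-source Tesseract engine via pytesseract.
# Assumes Tesseract, pytesseract, and Pillow are installed, and that
# 'sign.jpg' is an image file you provide.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("sign.jpg"))
print(text)  # the machine-readable text extracted from the image
```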

Facial Recognition

Have you ever wondered how Facebook and Google Photos are able to instantly know who the people are in the photos that you upload?

That’s Facial Recognition in action.

Since there’s already an existing database of photos of people in their social media profiles, the Googles and Facebooks of the world can pattern-match who’s who whenever new photos are uploaded online.

This is the same technology being used in some of the spy movies that you get to watch, where they can pause a CCTV recording to get a snapshot of someone’s face, and run it through Facial Recognition software to identify who that person is, even when they’re wearing a disguise.

Facial Recognition has become more advanced lately where computers are now able to tell sentiments or emotions based on your facial expression e.g. looking happy, surprised, etc.

Image Recognition

If you haven’t tried using Google Lens then you might have been living under a rock for some time.

Google Glass may have made you look dorky but Google Lens allows you to use your mobile phone’s camera to do wonders.

For example, when my wife was getting into succulents as a hobby, it was difficult for us to know the various species of succulents whenever we saw some.

What made the learning curve faster was through the help of Google Lens.

Simply take a photo of the succulent, and Google Lens will scrape the internet for all information relevant to that particular succulent, including the scientific name, rarity, origin, what type of weather or environment it grows in, how to take care of it, and so forth.

You can use Google Lens to get information about things you see and want to know more about.

Saw a watch or any other item you like and want to know more about it and where to get it? Use Google Lens.

Combining Image Recognition (or other forms of Computer Vision) with other sensors can yield wonders as well.

For example, there may be many replicas of the Eiffel Tower in many places all over the globe.

You can take a picture of the Eiffel Tower, and by using geo-location, you can tell whether it’s the real one if your coordinates indicate that you’re in Paris standing in front of it.
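Here’s a toy sketch of that idea in Python: take the label from an image-recognition step (assumed here, not shown) and combine it with a simple distance check on the phone’s coordinates. The Eiffel Tower coordinates are real; everything else is made up for illustration.

```python
# Toy sketch of combining an image label with geo-location: the label says
# "Eiffel Tower", and a distance check confirms you are actually in Paris.
# The recognized label is assumed to come from an image-recognition step.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two coordinates
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

EIFFEL_TOWER = (48.8584, 2.2945)
phone_location = (48.8580, 2.2950)   # where the photo was taken

label = "Eiffel Tower"               # assumed output of image recognition
if label == "Eiffel Tower" and distance_km(*phone_location, *EIFFEL_TOWER) < 1:
    print("Likely the real Eiffel Tower in Paris")
else:
    print("Probably a replica")
```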

Biometric Authentication

Fingerprint, Iris, and Facial Recognition are sometimes combined with Voice Recognition and even DNA matching.

These types of Computer Vision are usually being used in the security industry.

These things have also become a basic feature for unlocking our mobile phones.

Yes, that’s Computer Vision in action.

Driver Assist and Self-Driving Cars

Ah yes, driver assist and self-driving cars.

From lane and obstacle detection, to self-parking cars, to fully-automated self-driving cars.

This field is probably one of the most comprehensive uses of Computer Vision.

The reason is that all of the previous examples of Computer Vision I mentioned are typically based on a flat 2-dimensional image.

For self-driving cars, you need 3D. You need to give Computer Vision a sense of depth. Especially because you are moving forward, really fast.

One way to do that is by using LIDAR, which stands for Light Detection and Ranging.

I won’t be explaining it in detail here, though you can Google it if you’d like, but in a nutshell, this is achieved through lasers to give the Computer a sense of range and depth.

Elon Musk’s SpaceX uses this technology as well to dock its spacecraft.

Tesla, on the other hand, instead of LIDAR, uses 8 cameras, 12 ultrasonic sensors (for near-field “vision”), and a forward RADAR (because that’s the only direction in which you’re moving really fast).

The RADAR helps the cameras get a better sense of depth and range, even in conditions that block visible light, such as fog, snow, smoke, and so forth.

Ending Note

I hope you found this to be of value, and that it gave you a sense of what Computer Vision is and where it is being utilized in real-world scenarios.

If you have any questions, reply with a comment. I’d love to hear from you.

And if you think this is helpful and you’d want to get updates on the next article, subscribe for updates, and get a free copy of my book – The Business Optimization Blueprint.

Innovation: How the Deaf-Blind can use a Mobile Phone. Or can they?


It’s #FutureFridays and today we talk about one of the latest innovations from tech giant Samsung.

Imagine living in this current age, but lacking the ability to see the world around you, hear the people talking to you, or speak back to your family or loved ones.

What would your world be like?

That’s the kind of world a Deaf-Blind person lives in.

Thankfully, Samsung has come up with their latest innovation to solve this problem.

Read on to see how it works, and watch the video demo. I assure you, it won’t disappoint.

And if your eyes don’t well up with tears after watching it, you have no emotion.

The app is called Samsung Good Vibes.

It translates Morse Code into text or voice and vice versa.

If you are Deaf-Blind, you communicate using Morse Code, which is the interface you will be using on the app.

A short tap represents a dot, and a long press represents a dash.

People you are communicating with who can see and hear can receive messages and respond back via text or voice, which the app will convert into Morse Code vibrations for the Deaf-Blind to understand.
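To make the Morse Code part concrete, here’s a toy Python sketch of the Morse-to-text direction, where the dots and dashes tapped by the Deaf-Blind user become readable text. This is just an illustration of the translation idea, not Samsung’s actual app, which also converts replies back into vibration patterns.

```python
# Toy Morse-to-text translator: dots and dashes become readable text.
# Letters are separated by spaces, words by " / ".
MORSE_TO_CHAR = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y", "--..": "Z",
}

def morse_to_text(morse: str) -> str:
    words = morse.strip().split(" / ")
    return " ".join(
        "".join(MORSE_TO_CHAR.get(code, "?") for code in word.split())
        for word in words
    )

print(morse_to_text(".... . .-.. .-.. --- / .-- --- .-. .-.. -.."))  # HELLO WORLD
```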

Check out the video below.

The Samsung Good Vibes App

Video ©Samsung India

Ending Note

Now if you want to learn the skill sets that will allow you to innovate with a designer’s mindset, even if you’re not a designer, here’s how you can take action:

How can OCR (Optical Character Recognition) become AI (Artificial Intelligence)? Here’s how.

featuring Paradatec and AWS SageMaker Ground Truth


Alright, it’s #FutureFridays and today’s topic is OCR (Optical Character Recognition) and AI (Artificial Intelligence).

OCR has been around for quite some time already, and is widely considered mainstream technology.

OCR is simply the ability to have a machine convert text from a scanned document or image into machine readable format.

In a sense, it is a simple form of “Artificial Intelligence,” but because it has become mainstream technology already, most people no longer consider it to be a subset of AI, even though technically, it is, as it is part of Computer Vision.

How OCR really becomes AI is dependent on what you do with the machine readable data after the conversion process.

Let me show you an example of how a company took OCR technology to the next level with AI.

Paradatec’s Prosar AIDA Advanced OCR Technology

Video ©Paradatec Inc.

In this video you may have noticed a couple of things happening.

After the documents are scanned and OCR does its work, what happens is the software reads the contents and compares them with its training data as reference.

Training data is the information it has learned thus far. For example, if you show it thousands and thousands of title documents, it will know what a title document looks like when it sees one.

From there it is able to determine the document type, and classify it accordingly, even for unstructured documents i.e. not templated.

If it doesn’t recognize the document, or it isn’t sure what it is, it can exclude the document from the workflow and give it to a human who knows how to identify the document and index (manually classify) it accordingly.

This can also “teach” the machine learning algorithm, and the new information taught by the human can be added into its training data as needed, so that with enough samples, it can start recognizing and classifying the way it was taught.

Moreover, the data can be extracted from the documents and transferred into further downstream processes accordingly.
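In the same spirit as the workflow above (and definitely not Paradatec’s actual software), here’s a minimal sketch of training a document classifier on human-indexed OCR text using scikit-learn. The sample documents and labels are invented.

```python
# Minimal sketch of training a document-type classifier on labeled OCR text.
# Assumes scikit-learn; the sample documents and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

ocr_text = [
    "this deed of title certifies ownership of the property",
    "certificate of title registered to the owner below",
    "invoice number 4471 total amount due 30 days",
    "invoice for services rendered payment due upon receipt",
]
labels = ["title", "title", "invoice", "invoice"]  # indexed by humans

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(ocr_text), labels)

new_doc = "amount due on invoice 9921 within 30 days"
print(model.predict(vectorizer.transform([new_doc])))  # -> ['invoice']
```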

This is pretty similar to how Amazon’s AWS SageMaker Ground Truth AI Labeling platform works, which creates training datasets for Machine Learning.

Introducing AWS SageMaker Ground Truth

Video ©Amazon Web Services

Once you learn the terminologies it will all make sense.

Classification = How AI identifies what the data is and therefore classifies it accordingly.

Whereas Labeling = Indexing = How humans identify data, which the machine learning algorithm learns from.

For example, telling the difference between a dog and a cow, or when an MRI has an anomaly or not.

Humans usually combine datasets as well to make a decision before labeling accordingly, like by using sensors, geo-location, thermal imaging, researching reference materials, etc.

The machine learning algorithm can learn from this as well so it can classify accordingly once it is fed the training data.

That’s it! I hope you found this to be of value.

If you have any questions or feedback, feel free to reply with a comment. I would love to hear from you.
