ChatGPT
#31
Hi Mad Science - yeah, good points all, BUT the bad side of life has always been with us, including misinformation and unfounded facts & figures. And mostly it's all driven by power and money, muscle and might.

The thing about science, however, is that it is proof- and accuracy-driven. ChatGPT, for example, apparently has a routine which looks for fraudulent or unfounded facts. I'm not that up on it, but if that's true, then future versions of ChatGPT will come out with ways to improve its accuracy; otherwise, what good is it in light of all the limitations you have described today? The greatest threat to ChatGPT, IMHO, as Tempodi puts it, will be the governments of the world. The potential of generative AI has significant consequences for all occupations and for the financial institutions that are the cornerstones of today's society.

I do not have a copy of the ChatGPT program, so a lot, if not all, of what I say comes from the many, many books I have read on AI and its potential. No one can predict the future accurately (as our weather apps demonstrate), but ChatGPT, in its present state, is just scratching the surface of what it could become.
#32
(05-03-2023, 03:53 PM)Dimster Wrote: ChatGPT, for example, apparently has a routine which looks for fraudulent or unfounded facts. [...] The greatest threat to ChatGPT, IMHO, as Tempodi puts it, will be the governments of the world.

This technology does have some great potential - just wait 'til they perfect the part that verifies accuracy and put it on the job of curing cancer, preventing wars & financial crises, etc.! Like you said, the danger of government meddling is there - who knows how it's already being used? 

It's just that when we talk about turning over teaching and people's education almost exclusively to these kinds of systems, my spider sense starts tingling. Real human interaction and experience are important, not only for safety, but for a richer, more meaningful life for everyone.

Let's keep up the conversations about this stuff, and hopefully contribute to making it better and avoiding the pitfalls!
#33
All these AI assistants are there to gather data on people for nefarious purposes. It's all just a matter of who you're giving your info to.
Schuwatch!
Yes, it's me. Now shut up.
#34
(05-03-2023, 05:02 PM)madscijr Wrote: Let's keep up the conversations about this stuff, and hopefully contribute to making it better and avoiding the pitfalls!

I agree, as long as it has to do with programming in QB64, since that is what this sub-forum is for. Otherwise this topic should be moved to Off-Topic, although that would limit who could see it and post.

(Raises hand) I'm also making a post that has little to do with QB64 programming, but I needed to get something out. ChatGPT is already being used for nefarious purposes, as I explained earlier. Enough dishonest people claim to make a living by doing things like that: they need to sell goods but try to maximize their profits, which usually makes them balk at paying someone else to handle customer support.

The TuxBot experiment was designed to get a few laughs, although some of its advice is surprisingly accurate and clear. Still, none of the code examples the bot provides should be employed in a mission-critical setting. So limit the ChatGPT "experiment" to being just that, something with entertainment value. Otherwise it's boring and immoral, and should be condemned and nipped in the bud.

But what a shame the "openai" site now requires a user to have an account. They likely changed the format, too, for interacting with the pseudo-philosopher. It might be a job for QBJS to extract text that cannot be provided in HTML or XML for security reasons. At least I will never know. If somebody does have an account with that site, it would be nice if he/she could give a brief explanation of what the interaction is like. Testing like that could help the HTTP functionality of QB64PE get out of $UNSTABLE status; a rough sketch of that functionality is below.
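
For anyone who does want to exercise that HTTP support, here is a minimal, untested sketch of fetching a page with QB64PE's experimental HTTP client. It assumes the $UNSTABLE:HTTP interface as the QB64PE documentation currently describes it (_OPENCLIENT with an "HTTP:" prefix, _STATUSCODE, and GET # reads until EOF); the URL is only a placeholder, and the exact names and behaviour may still change while the feature remains unstable.

$UNSTABLE:HTTP

DIM h AS LONG
DIM chunk AS STRING, page AS STRING

' Placeholder URL - swap in whatever endpoint you are actually testing against.
h = _OPENCLIENT("HTTP:https://www.example.com")
IF h = 0 THEN
    PRINT "Could not connect."
    SYSTEM
END IF

PRINT "HTTP status code:"; _STATUSCODE(h) ' response code as reported by the server

DO
    _LIMIT 60           ' avoid hogging the CPU while waiting for data
    GET #h, , chunk     ' read whatever bytes have arrived so far
    page = page + chunk
LOOP UNTIL EOF(h)       ' EOF signals the full response has been received

CLOSE #h
PRINT LEN(page); "bytes received"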
#35
(05-03-2023, 05:41 PM)Ultraman Wrote: All these AI assistants are there to gather data on people for nefarious purposes. It's all just a matter of who you're giving your info to.

It is true that the info they gather COULD be used for nefarious purposes. The truth is, we don't know the motivations behind these systems, and even if they are harmless, that's no guarantee that the info won't fall into the wrong hands in the future.

Lately when I read Reddit, I see lots of very personal questions being posted, and people answer them. They could be asked out of genuine curiosity, or it could be someone methodically collecting data on people; who knows.

We definitely want to tread lightly and be very careful about what information we share online, because you never know!
#36
(05-03-2023, 05:41 PM)Ultraman Wrote: All these AI assistants are there to gather data on people for nefarious purposes. It's all just a matter of who you're giving your info to.

Yep, they can be used for that for sure. But this line in your post, "It's all a matter of who you're giving your info to" is most relevant. This has been true since before the age of modern data-mining. These systems *might* be helpful in the future, but we need to educate everyone on how to use them. That's going to be tough, though. Home computers have been around for over 48 years now and MOST people still don't know how to properly use them. The web (WWW) turned exactly 30 years old as of today, and MOST people have no idea how to use it or protect themselves on it. It's going to be an uphill battle. These so-called AI systems (they're not, by the way) are not going to go away. Pandora's box has been opened and data-miners are wetting themselves left and right over it.
Software and cathedrals are much the same — first we build them, then we pray.
QB64 Tutorial
#37
(05-03-2023, 08:43 PM)TerryRitchie Wrote:
(05-03-2023, 05:41 PM)Ultraman Wrote: All these AI assistants are there to gather data on people for nefarious purposes. It's all just a matter of who you're giving your info to.
Yep, they can be used for that for sure. But this line in your post, "It's all a matter of who you're giving your info to" is most relevant. 
The only problem with this is that we might give our info to Company X, which does nothing bad with it, but then gets bought out or taken over by Company Y, which has a different policy!
Or Company X keeps our data private, but then gets hacked by Bad Actor. I'm just saying!

For that reason, we must assume our info will be available to Bad Actor, before we even share it. 

That makes using social media tricky - people share their whole lives on countless different platforms, the only comfort being that you are among millions of people, so what makes you any more likely to be targeted than someone else? But with AI emerging and the ability of computers to process huge amounts of information, the risk of everyone being targeted increases.

Paranoidedly yours!
#38
Ha! I just bought a new iPad, and I can't even get it to share the apps from the old one to the new one. Information sharing/hacking is overhyped. :P
#39
(02-22-2023, 03:16 AM)bplus Wrote: Yeah I balked too at signing up having paranoid thoughts about the powers of AI.

Just don't ask if it can make a better copy of itself.

Yes, that's a bit scary!
Of all the places on Earth, and all the planets in the Universe, I'd rather live here (Perth, W.A.) :D
#40
Online AI is malevolent; local AI instances are good. Use GPT4ALL, which is cross-platform. It has tons of models, all trained for different things. If you have a fast computer, you can get quick responses; slower computers will be slow as molasses on certain models.
Ask me about Windows API and maybe some Linux stuff



