From debchandler411 at gmail.com Thu Mar 9 03:10:08 2023 From: debchandler411 at gmail.com (Deborah Chandler) Date: Wed, 8 Mar 2023 22:10:08 -0500 Subject: [Hidden-tech] * * * Need to identify a font * * * In-Reply-To: References: Message-ID: Hi Chris and everyone, Thank you so much for each of your attempts at identifying the font! I truly appreciate the team effort. I have found another similar font and designed a logo that the client really likes. Thanks again! Deb On Mon, Feb 27, 2023 at 9:59?PM Chris A via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > Hi Deb, > > No luck on an exact match on Whatthefont, fontsquirrel or Whatfontis (all > .com's), several very close fonts, with Combustible Italic and MB > Edwardsson being the closest. But neither are it. > > It's very likely a custom vector font or actual digitized handwriting, > certainly some designers do this kind of custom font design. > > Sorry I couldn't help! > > - Chris > > > On 2/25/23 6:00 PM, Deborah Chandler via Hidden-discuss wrote: > > Hi folks, > > Could someone either: > 1) Help me identify the font used in the logo for Master Chef; see > https://www.masterchefsh.com. > (And because it is small and hard to see, and I know we can't attach a > file to a group email, I am happy to send a larger black and white copy of > this logo upon request) > 2) Tell me your favorite font identifying website. I have tried about 4 of > them, and still can't find this font. > > Thanks! > Deb > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.netHidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site.http://www.hidden-tech.net/members > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: From debchandler411 at gmail.com Thu Mar 9 03:07:43 2023 From: debchandler411 at gmail.com (Deborah Chandler) Date: Wed, 8 Mar 2023 22:07:43 -0500 Subject: [Hidden-tech] * * * Placing image file as content of email * * * Message-ID: Hi folks, Could someone please confirm this process for me: My client wants to take an event flyer I have designed and use it as the full page image in an email. I just tested this in my Gmail, and was able to do it by using "Insert Photo" from the menu. But a few questions: 1) Is the upload process going to be the same for Yahoo and Crocker users? (My clients' email addresses are with those two...) 2) Does the size that it shows up in the recipients' email depend on the original file size (pixel dimensions) or the recipient's screen size, and whether phone or computer? Thanks! Deb -------------- next part -------------- An HTML attachment was scrubbed... URL: From yudkinyudkin at yahoo.com Thu Mar 9 20:29:36 2023 From: yudkinyudkin at yahoo.com (Marcia Yudkin) Date: Thu, 9 Mar 2023 20:29:36 +0000 (UTC) Subject: [Hidden-tech] Question about ChatGPT and machine learning References: <196290060.971490.1678393776863.ref@mail.yahoo.com> Message-ID: <196290060.971490.1678393776863@mail.yahoo.com> This question is for anyone who understands how the machine learning in ChatGPT works. I've been finding ChatGPT useful for summarizing information that is widely dispersed around the web, such as questions like "what are the most popular objections to X?"? However, the other day for a blog post I was writing I asked it "What are some sources on the relationship of X to Y?"? It gave me four sources of information, including the article title, where it was published and who wrote it.?? This looked great, especially since I recognized two of the author names as authorities on X.? However, when I then did a Google search, I could not track down any of the four articles, either by title, author or place of publication.? I tried both in Google and in Bing.? Zilch! Could ChatGPT have totally made up these sources?? If so, how does that work? I am baffled about the explanation of this.? One of the publications involved was Psychology Today, so we are not talking about obscure corners of the Internet or sites that would have disappeared recently. Thanks for any insights. Marcia Yudkin Introvert UpThink Introvert UpThink | Marcia Yudkin | Substack Introvert UpThink | Marcia Yudkin | Substack Marcia Yudkin Exploring how introverts are misunderstood, maligned and underappreciated in our culture - yet still thrive. Cli... From yudkinyudkin at yahoo.com Thu Mar 9 23:35:08 2023 From: yudkinyudkin at yahoo.com (Marcia Yudkin) Date: Thu, 9 Mar 2023 23:35:08 +0000 (UTC) Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> Message-ID: <217866716.1017185.1678404908381@mail.yahoo.com> Hi Viet, I did ask for the URLs but the reply was that it was not a search engine and therefore not able to provide links. When I inquired again just now, the ChatGPT replied that maybe the articles have been removed from the Internet and archived.? That doesn't really answer my question, though.? How was ChatGPT able to access them if I am not?? Is it that it was fed a version of the Internet from a couple of years ago? On Thursday, March 9, 2023 at 01:08:50 PM HST, Viet Trang wrote: Marcia, try asking ChatGPT itself whether the sources provided are factual or fictional, and for URL links if factual. It should be able to handle those kinds of prompts. On Thu, Mar 9, 2023 at 4:59?PM Marcia Yudkin via Hidden-discuss wrote: > This question is for anyone who understands how the machine learning in ChatGPT works. > > I've been finding ChatGPT useful for summarizing information that is widely dispersed around the web, such as questions like "what are the most popular objections to X?"? However, the other day for a blog post I was writing I asked it "What are some sources on the relationship of X to Y?"? It gave me four sources of information, including the article title, where it was published and who wrote it.?? > > This looked great, especially since I recognized two of the author names as authorities on X.? However, when I then did a Google search, I could not track down any of the four articles, either by title, author or place of publication.? I tried both in Google and in Bing.? Zilch! > > Could ChatGPT have totally made up these sources?? If so, how does that work? > > I am baffled about the explanation of this.? One of the publications involved was Psychology Today, so we are not talking about obscure corners of the Internet or sites that would have disappeared recently. > > Thanks for any insights. > > Marcia Yudkin > Introvert UpThink > Introvert UpThink | Marcia Yudkin | Substack > > > > > > Introvert UpThink | Marcia Yudkin | Substack > ?Marcia Yudkin > ?Exploring how introverts are misunderstood, maligned and underappreciated in our culture - yet still thrive. Cli...? ? > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > From viet.q.trang at gmail.com Thu Mar 9 23:08:41 2023 From: viet.q.trang at gmail.com (Viet Trang) Date: Thu, 9 Mar 2023 18:08:41 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <196290060.971490.1678393776863@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> Message-ID: Marcia, try asking ChatGPT itself whether the sources provided are factual or fictional, and for URL links if factual. It should be able to handle those kinds of prompts. On Thu, Mar 9, 2023 at 4:59?PM Marcia Yudkin via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > This question is for anyone who understands how the machine learning in > ChatGPT works. > > I've been finding ChatGPT useful for summarizing information that is > widely dispersed around the web, such as questions like "what are the most > popular objections to X?" However, the other day for a blog post I was > writing I asked it "What are some sources on the relationship of X to Y?" > It gave me four sources of information, including the article title, where > it was published and who wrote it. > > This looked great, especially since I recognized two of the author names > as authorities on X. However, when I then did a Google search, I could not > track down any of the four articles, either by title, author or place of > publication. I tried both in Google and in Bing. Zilch! > > Could ChatGPT have totally made up these sources? If so, how does that > work? > > I am baffled about the explanation of this. One of the publications > involved was Psychology Today, so we are not talking about obscure corners > of the Internet or sites that would have disappeared recently. > > Thanks for any insights. > > Marcia Yudkin > Introvert UpThink > Introvert UpThink | Marcia Yudkin | Substack > > > > > > Introvert UpThink | Marcia Yudkin | Substack > Marcia Yudkin > Exploring how introverts are misunderstood, maligned and underappreciated > in our culture - yet still thrive. Cli... > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: From viet.q.trang at gmail.com Thu Mar 9 23:58:46 2023 From: viet.q.trang at gmail.com (Viet Trang) Date: Thu, 9 Mar 2023 18:58:46 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <217866716.1017185.1678404908381@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <217866716.1017185.1678404908381@mail.yahoo.com> Message-ID: Interesting. I've asked it similar prompts in the past and it was able to provide links. Its response to you implies that the articles were scraped from the Internet in the past. Continuing your search on other engines may be helpful if your goal is to find the articles. Good luck! On Thu, Mar 9, 2023, 18:35 Marcia Yudkin wrote: > Hi Viet, > > I did ask for the URLs but the reply was that it was not a search engine > and therefore not able to provide links. > > When I inquired again just now, the ChatGPT replied that maybe the > articles have been removed from the Internet and archived. That doesn't > really answer my question, though. How was ChatGPT able to access them if > I am not? Is it that it was fed a version of the Internet from a couple of > years ago? > > > > > > > On Thursday, March 9, 2023 at 01:08:50 PM HST, Viet Trang < > viet.q.trang at gmail.com> wrote: > > > > > > Marcia, try asking ChatGPT itself whether the sources provided are factual > or fictional, and for URL links if factual. It should be able to handle > those kinds of prompts. > > On Thu, Mar 9, 2023 at 4:59?PM Marcia Yudkin via Hidden-discuss < > hidden-discuss at lists.hidden-tech.net> wrote: > > This question is for anyone who understands how the machine learning in > ChatGPT works. > > > > I've been finding ChatGPT useful for summarizing information that is > widely dispersed around the web, such as questions like "what are the most > popular objections to X?" However, the other day for a blog post I was > writing I asked it "What are some sources on the relationship of X to Y?" > It gave me four sources of information, including the article title, where > it was published and who wrote it. > > > > This looked great, especially since I recognized two of the author names > as authorities on X. However, when I then did a Google search, I could not > track down any of the four articles, either by title, author or place of > publication. I tried both in Google and in Bing. Zilch! > > > > Could ChatGPT have totally made up these sources? If so, how does that > work? > > > > I am baffled about the explanation of this. One of the publications > involved was Psychology Today, so we are not talking about obscure corners > of the Internet or sites that would have disappeared recently. > > > > Thanks for any insights. > > > > Marcia Yudkin > > Introvert UpThink > > Introvert UpThink | Marcia Yudkin | Substack > > > > > > > > > > > > Introvert UpThink | Marcia Yudkin | Substack > > Marcia Yudkin > > Exploring how introverts are misunderstood, maligned and > underappreciated in our culture - yet still thrive. Cli... > > > > > > _______________________________________________ > > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > > Hidden-discuss at lists.hidden-tech.net > > > > You are receiving this because you are on the Hidden-Tech Discussion > list. > > If you would like to change your list preferences, Go to the Members > > page on the Hidden Tech Web site. > > http://www.hidden-tech.net/members > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan at 8wheels.org Fri Mar 10 02:10:42 2023 From: alan at 8wheels.org (Alan Frank) Date: Thu, 09 Mar 2023 21:10:42 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <196290060.971490.1678393776863@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> Message-ID: <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> ChatGPT is a conversation engine, not a search engine. It is designed to provide plausible responses based on similarity of questions and answers to existing material on the internet, without attempting to correlate its responses with actual facts. Pretty much every social media space I follow has had multiple posts from people pointing out ridiculous factual errors from ChatGPT. --Alan -------- Original Message -------- Subject: [Hidden-tech] Question about ChatGPT and machine learning Date: 2023-03-09 15:29 From: Marcia Yudkin via Hidden-discuss To: "Hidden-discuss at lists.hidden-tech.net" This question is for anyone who understands how the machine learning in ChatGPT works. I've been finding ChatGPT useful for summarizing information that is widely dispersed around the web, such as questions like "what are the most popular objections to X?"? However, the other day for a blog post I was writing I asked it "What are some sources on the relationship of X to Y?"? It gave me four sources of information, including the article title, where it was published and who wrote it.?? This looked great, especially since I recognized two of the author names as authorities on X.? However, when I then did a Google search, I could not track down any of the four articles, either by title, author or place of publication.? I tried both in Google and in Bing.? Zilch! Could ChatGPT have totally made up these sources?? If so, how does that work? I am baffled about the explanation of this.? One of the publications involved was Psychology Today, so we are not talking about obscure corners of the Internet or sites that would have disappeared recently. Thanks for any insights. Marcia Yudkin Introvert UpThink Introvert UpThink | Marcia Yudkin | Substack Introvert UpThink | Marcia Yudkin | Substack Marcia Yudkin Exploring how introverts are misunderstood, maligned and underappreciated in our culture - yet still thrive. Cli... _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members From yudkinyudkin at yahoo.com Fri Mar 10 03:14:21 2023 From: yudkinyudkin at yahoo.com (Marcia Yudkin) Date: Fri, 10 Mar 2023 03:14:21 +0000 (UTC) Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> Message-ID: <62601017.1050433.1678418061515@mail.yahoo.com> Yes, I know that people have been pointing out "ridiculous factual errors" from ChatGPT.? ?However, to make up sources that sound completely plausible but are fake seems like it belongs in a whole other category. On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank wrote: ChatGPT is a conversation engine, not a search engine.? It is designed to provide plausible responses based on similarity of questions and answers to existing material on the internet, without attempting to correlate its responses with actual facts.? Pretty much every social media space I follow has had multiple posts from people pointing out ridiculous factual errors from ChatGPT. --Alan -------- Original Message -------- Subject: [Hidden-tech] Question about ChatGPT and machine learning Date: 2023-03-09 15:29 From: Marcia Yudkin via Hidden-discuss To: "Hidden-discuss at lists.hidden-tech.net" This question is for anyone who understands how the machine learning in ChatGPT works. I've been finding ChatGPT useful for summarizing information that is widely dispersed around the web, such as questions like "what are the most popular objections to X?"? However, the other day for a blog post I was writing I asked it "What are some sources on the relationship of X to Y?"? It gave me four sources of information, including the article title, where it was published and who wrote it.?? This looked great, especially since I recognized two of the author names as authorities on X.? However, when I then did a Google search, I could not track down any of the four articles, either by title, author or place of publication.? I tried both in Google and in Bing.? Zilch! Could ChatGPT have totally made up these sources?? If so, how does that work? I am baffled about the explanation of this.? One of the publications involved was Psychology Today, so we are not talking about obscure corners of the Internet or sites that would have disappeared recently. Thanks for any insights. Marcia Yudkin Introvert UpThink Introvert UpThink | Marcia Yudkin | Substack Introvert UpThink | Marcia Yudkin | Substack ? Marcia Yudkin ? Exploring how introverts are misunderstood, maligned and underappreciated in our culture - yet still thrive. Cli... _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members From shel at principledprofit.com Fri Mar 10 11:29:39 2023 From: shel at principledprofit.com (Shel Horowitz) Date: Fri, 10 Mar 2023 06:29:39 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> Message-ID: Yes. Things can be plausible without being accurate, and that apparently is a huge problem with the chatbot. On Thu, Mar 9, 2023, 9:10 PM Alan Frank via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > ChatGPT is a conversation engine, not a search engine. It is designed > to provide plausible responses based on similarity of questions and > answers to existing material on the internet, without attempting to > correlate its responses with actual facts. Pretty much every social > media space I follow has had multiple posts from people pointing out > ridiculous factual errors from ChatGPT. > > --Alan > > > -------- Original Message -------- > Subject: [Hidden-tech] Question about ChatGPT and machine learning > Date: 2023-03-09 15:29 > From: Marcia Yudkin via Hidden-discuss > > To: "Hidden-discuss at lists.hidden-tech.net" > > > This question is for anyone who understands how the machine learning in > ChatGPT works. > > I've been finding ChatGPT useful for summarizing information that is > widely dispersed around the web, such as questions like "what are the > most popular objections to X?" However, the other day for a blog post I > was writing I asked it "What are some sources on the relationship of X > to Y?" It gave me four sources of information, including the article > title, where it was published and who wrote it. > > This looked great, especially since I recognized two of the author names > as authorities on X. However, when I then did a Google search, I could > not track down any of the four articles, either by title, author or > place of publication. I tried both in Google and in Bing. Zilch! > > Could ChatGPT have totally made up these sources? If so, how does that > work? > > I am baffled about the explanation of this. One of the publications > involved was Psychology Today, so we are not talking about obscure > corners of the Internet or sites that would have disappeared recently. > > Thanks for any insights. > > Marcia Yudkin > Introvert UpThink > Introvert UpThink | Marcia Yudkin | Substack > > > > > > Introvert UpThink | Marcia Yudkin | Substack > Marcia Yudkin > Exploring how introverts are misunderstood, maligned and > underappreciated in our culture - yet still thrive. Cli... > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdmurray at bitdance.com Fri Mar 10 15:03:54 2023 From: rdmurray at bitdance.com (R. David Murray) Date: Fri, 10 Mar 2023 10:03:54 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <62601017.1050433.1678418061515@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> Message-ID: <20230310150355.CAE43B1401C@webabinitio.net> From what I understand (admittedly from only a *basic* understanding of machine learning), it is not so much that ChatGPT is "making errors", but rather that it is "making stuff up", and does not admit that it is making stuff up. I'm going to brain dump what I think here, but I'm not an expert in this by any stretch, so don't take me as an authority. Perhaps this can help you reason about ChartGPT until you find a better expert to consult ;) One thing to understand is that this is a *trained* model. That means that it was given a set of questions and answers and told "these are good, these are bad", probably with a rating of *how* good or bad. Then it was given a lot of other data (and how exactly this gets turned into questions and answers is *way* beyond my knowledge level). Then a team of model trainers started asking questions. The trainers would look at the answers it came up with and rate them, thus adding to the "trained" data set. When you tell ChatGPT that its answer was good or bad, you are also potentially adding to that training data, by the way. I'm guessing that the way the system works there is actually no way for it to "know" that it has made something up. The output that it produces is generated based on what you can think of as a very advanced version of statistical language modelling: given a certain input, what are the most likely kinds of things that would follow as a response? And like any statistical model, when you get enough standard deviations out, things get weird. At no point in the model output are things tagged as "made up" or "not made up": it is *ALL* made up. In the middle of the bell curve the made up things are *much* more likely to be "correct" than out at the edges of the bell curve. But oh those edges... It is of course more sophisticated than a statistical model, but the same principle applies: if there are few examples of *exactly* the kind of data your input contains, then it is going to draw from stuff that is a lot less closely related to your input for its response. But, and here is the important part, it is going to make up *something* to answer with. If a source is mentioned multiple times in the context of your input, it will use it. If there are no sources mentioned in the context of your input, it will generate an output that looks like the *kind of thing* that would be a response to that *kind of input*. In this case that included a list of articles. It generated at least one of them from an author whose name was probably mentioned in the context of your input, but never with an actual article name attached. Or maybe that author was mentioned in the context of conversations containing a subset of the *words* in your input (rather than logically formed sentences), depending on just how fuzzy the match was. Then it effectively made up a plausible sounding article name to go with the author name, because that's what responses to other similar questions in its training data looked like (not similar in content, but similar in *form*). So while I agree that making up all the sources seems like an extreme example of this, ChatGPT is what Science Fiction calls an "Artificial Stupid" (something that can't actually *reason*), and thus I think my explanation is plausible. It just depends on how fuzzy the match was that it made on the input. If the match was very fuzzy, then it would have come back with material from its data that generally followed at least some of your input, and then since responses the trainers considered "good" to questions like that usually included some sources, it made some up based on how the answers to other, less related, questions looked. Anyone want to bet that four sources was the average number that was accepted as "a good answer" by the people who did the training? I know I've seen "four things" in a couple of ChatGPT answers, and I haven't asked it very many questions :) Given all this, there are only two things you can do, one of which is exactly what you did: ask it for the sources. Given *that* input, it should be able to come up with the most likely response being the actual source. If it can't, then it has probably made up the source (note: I have not tested this technique myself, but it follows logically from how I think the system works). The second thing you can do (which you probably also already did) is to rephrase your input, giving it different amounts and kinds of context, and see how the output changes. If your altered input results in a less fuzzy match, you will get better answers. The big takeaway, which you clearly already know, is to never trust anything ChatGPT produces. Use it as a rough draft, but verify all the facts. My fear is that there are going to be a lot of people who aren't as diligent, and we'll end up with a lot of made up information out on the web adding to all of the maliciously bad information that is already out there. I have read that the ChatGPT researchers are worried about how to avoid using ChatGPT's output as input to a later ChatGPT model, and I have no idea how they are going to achieve that! And keep in mind that that maliciously bad information *is part of ChatGPT's data set*. Some of it the people who did the training will have caught, but I'm willing to bet they missed a lot of it because *they* didn't know it was bad, or it never came up during training. --David On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss wrote: > Yes, I know that people have been pointing out "ridiculous factual errors" from ChatGPT.? ?However, to make up sources that sound completely plausible but are fake seems like it belongs in a whole other category. > > > > > > > On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank wrote: > > > > > > ChatGPT is a conversation engine, not a search engine.? It is designed > to provide plausible responses based on similarity of questions and > answers to existing material on the internet, without attempting to > correlate its responses with actual facts.? Pretty much every social > media space I follow has had multiple posts from people pointing out > ridiculous factual errors from ChatGPT. > > --Alan > > > -------- Original Message -------- > Subject: [Hidden-tech] Question about ChatGPT and machine learning > Date: 2023-03-09 15:29 > From: Marcia Yudkin via Hidden-discuss > > To: "Hidden-discuss at lists.hidden-tech.net" > > > This question is for anyone who understands how the machine learning in > ChatGPT works. > > I've been finding ChatGPT useful for summarizing information that is > widely dispersed around the web, such as questions like "what are the > most popular objections to X?"? However, the other day for a blog post I > was writing I asked it "What are some sources on the relationship of X > to Y?"? It gave me four sources of information, including the article > title, where it was published and who wrote it.?? > > This looked great, especially since I recognized two of the author names > as authorities on X.? However, when I then did a Google search, I could > not track down any of the four articles, either by title, author or > place of publication.? I tried both in Google and in Bing.? Zilch! > > Could ChatGPT have totally made up these sources?? If so, how does that > work? > > I am baffled about the explanation of this.? One of the publications > involved was Psychology Today, so we are not talking about obscure > corners of the Internet or sites that would have disappeared recently. > > Thanks for any insights. > > Marcia Yudkin > Introvert UpThink > Introvert UpThink | Marcia Yudkin | Substack > > > > > > Introvert UpThink | Marcia Yudkin | Substack > ? Marcia Yudkin > ? Exploring how introverts are misunderstood, maligned and > underappreciated in our culture - yet still thrive. Cli... > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members From natederose at gmail.com Fri Mar 10 15:18:09 2023 From: natederose at gmail.com (Nate DeRose) Date: Fri, 10 Mar 2023 10:18:09 -0500 Subject: [Hidden-tech] HP Pagewide Pro 552dw Message-ID: Hey folks, We've got one of these units that was pulled from service and we no longer need. It's in good working order, with an additional paper drawer. It will need one of the 4 inks replaced, but other than that, it's ready to go to a good home if anyone wants it. Free to be picked up in Florence. Thanks, Nate -------------- next part -------------- An HTML attachment was scrubbed... URL: From viet.q.trang at gmail.com Fri Mar 10 17:32:37 2023 From: viet.q.trang at gmail.com (Viet Trang) Date: Fri, 10 Mar 2023 12:32:37 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> Message-ID: >> ChatGPT is a conversation engine, not a search engine. This is true but just means it cannot provide real time search results to you (unless you're using it from Bing). It does not mean it cannot provide links to pages it has already processed. Though I would be surprised if any of the links work (again, unless you're using Bing's ChatGPT). This is why I suggest searching on other engines. Publisher archives may be helpful too. >> It is designed to provide plausible responses based on similarity of questions and answers to existing material on the internet, without attempting to correlate its responses with actual facts. This isn't true. ChatGPT may make mistakes, but OpenAI cares about the quality of responses and designed ChatGPT with the aim of being as accurate and helpful as possible, to benefit humanity (and to make a more profitable product). It may make something up but it should not if you tell it not to, and it should be honest about how its response was generated if asked. The problem here is that the pages it has linked to unfortunately have been taken down and it may be difficult to find copies. On Thu, Mar 9, 2023 at 9:57?PM Alan Frank via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > ChatGPT is a conversation engine, not a search engine. It is designed > to provide plausible responses based on similarity of questions and > answers to existing material on the internet, without attempting to > correlate its responses with actual facts. Pretty much every social > media space I follow has had multiple posts from people pointing out > ridiculous factual errors from ChatGPT. > > --Alan > > > -------- Original Message -------- > Subject: [Hidden-tech] Question about ChatGPT and machine learning > Date: 2023-03-09 15:29 > From: Marcia Yudkin via Hidden-discuss > > To: "Hidden-discuss at lists.hidden-tech.net" > > > This question is for anyone who understands how the machine learning in > ChatGPT works. > > I've been finding ChatGPT useful for summarizing information that is > widely dispersed around the web, such as questions like "what are the > most popular objections to X?" However, the other day for a blog post I > was writing I asked it "What are some sources on the relationship of X > to Y?" It gave me four sources of information, including the article > title, where it was published and who wrote it. > > This looked great, especially since I recognized two of the author names > as authorities on X. However, when I then did a Google search, I could > not track down any of the four articles, either by title, author or > place of publication. I tried both in Google and in Bing. Zilch! > > Could ChatGPT have totally made up these sources? If so, how does that > work? > > I am baffled about the explanation of this. One of the publications > involved was Psychology Today, so we are not talking about obscure > corners of the Internet or sites that would have disappeared recently. > > Thanks for any insights. > > Marcia Yudkin > Introvert UpThink > Introvert UpThink | Marcia Yudkin | Substack > > > > > > Introvert UpThink | Marcia Yudkin | Substack > Marcia Yudkin > Exploring how introverts are misunderstood, maligned and > underappreciated in our culture - yet still thrive. Cli... > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yudkinyudkin at yahoo.com Sat Mar 11 01:36:26 2023 From: yudkinyudkin at yahoo.com (Marcia Yudkin) Date: Sat, 11 Mar 2023 01:36:26 +0000 (UTC) Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <20230310150355.CAE43B1401C@webabinitio.net> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> Message-ID: <1714938371.1314807.1678498586187@mail.yahoo.com> David, Some great points there.? I especially like this one: >>it is *ALL* made up.<< That helps me to dimly understand that everything the chat says is simply plausible, no more than that. Maybe we should think of it as no more authoritative than the cocktail party chatter of someone who reads indiscriminately and can't shut up until they've spewed five paragraphs. Marcia On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via Hidden-discuss wrote: >From what I understand (admittedly from only a *basic* understanding of machine learning), it is not so much that ChatGPT is "making errors", but rather that it is "making stuff up", and does not admit that it is making stuff up. I'm going to brain dump what I think here, but I'm not an expert in this by any stretch, so don't take me as an authority.? Perhaps this can help you reason about ChartGPT until you find a better expert to consult ;) One thing to understand is that this is a *trained* model.? That means that it was given a set of questions and answers and told "these are good, these are bad", probably with a rating of *how* good or bad.? Then it was given a lot of other data (and how exactly this gets turned into questions and answers is *way* beyond my knowledge level).? Then a team of model trainers started asking questions.? The trainers would look at the answers it came up with and rate them, thus adding to the "trained" data set.? When you tell ChatGPT that its answer was good or bad, you are also potentially adding to that training data, by the way. I'm guessing that the way the system works there is actually no way for it to "know" that it has made something up.? The output that it produces is generated based on what you can think of as a very advanced version of statistical language modelling:? given a certain input, what are the most likely kinds of things that would follow as a response?? And like any statistical model, when you get enough standard deviations out, things get weird.? At no point in the model output are things tagged as "made up" or "not made up":? it is *ALL* made up. In the middle of the bell curve the made up things are *much* more likely to be "correct" than out at the edges of the bell curve.? But oh those edges... It is of course more sophisticated than a statistical model, but the same principle applies:? if there are few examples of *exactly* the kind of data your input contains, then it is going to draw from stuff that is a lot less closely related to your input for its response.? But, and here is the important part, it is going to make up *something* to answer with.? If a source is mentioned multiple times in the context of your input, it will use it.? If there are no sources mentioned in the context of your input, it will generate an output that looks like the *kind of thing* that would be a response to that *kind of input*.? In this case that included a list of articles.? It generated at least one of them from an author whose name was probably mentioned in the context of your input, but never with an actual article name attached.? Or maybe that author was mentioned in the context of conversations containing a subset of the *words* in your input (rather than logically formed sentences), depending on just how fuzzy the match was.? Then it effectively made up a plausible sounding article name to go with the author name, because that's what responses to other similar questions in its training data looked like (not similar in content, but similar in *form*). So while I agree that making up all the sources seems like an extreme example of this, ChatGPT is what Science Fiction calls an "Artificial Stupid" (something that can't actually *reason*), and thus I think my explanation is plausible.? It just depends on how fuzzy the match was that it made on the input.? If the match was very fuzzy, then it would have come back with material from its data that generally followed at least some of your input, and then since responses the trainers considered "good" to questions like that usually included some sources, it made some up based on how the answers to other, less related, questions looked. Anyone want to bet that four sources was the average number that was accepted as "a good answer" by the people who did the training?? I know I've seen "four things" in a couple of ChatGPT answers, and I haven't asked it very many questions :) Given all this, there are only two things you can do, one of which is exactly what you did: ask it for the sources.? Given *that* input, it should be able to come up with the most likely response being the actual source.? If it can't, then it has probably made up the source (note: I have not tested this technique myself, but it follows logically from how I think the system works). The second thing you can do (which you probably also already did) is to rephrase your input, giving it different amounts and kinds of context, and see how the output changes.? If your altered input results in a less fuzzy match, you will get better answers. The big takeaway, which you clearly already know, is to never trust anything ChatGPT produces.? Use it as a rough draft, but verify all the facts. My fear is that there are going to be a lot of people who aren't as diligent, and we'll end up with a lot of made up information out on the web adding to all of the maliciously bad information that is already out there.? I have read that the ChatGPT researchers are worried about how to avoid using ChatGPT's output as input to a later ChatGPT model, and I have no idea how they are going to achieve that! And keep in mind that that maliciously bad information *is part of ChatGPT's data set*.? Some of it the people who did the training will have caught, but I'm willing to bet they missed a lot of it because *they* didn't know it was bad, or it never came up during training. --David On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss wrote: > Yes, I know that people have been pointing out "ridiculous factual errors" from ChatGPT.? ?However, to make up sources that sound completely plausible but are fake seems like it belongs in a whole other category. > > > > > > > On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank wrote: > > > > > > ChatGPT is a conversation engine, not a search engine.? It is designed > to provide plausible responses based on similarity of questions and > answers to existing material on the internet, without attempting to > correlate its responses with actual facts.? Pretty much every social > media space I follow has had multiple posts from people pointing out > ridiculous factual errors from ChatGPT. > > --Alan > > > -------- Original Message -------- > Subject: [Hidden-tech] Question about ChatGPT and machine learning > Date: 2023-03-09 15:29 > From: Marcia Yudkin via Hidden-discuss > > To: "Hidden-discuss at lists.hidden-tech.net" > > > This question is for anyone who understands how the machine learning in > ChatGPT works. > > I've been finding ChatGPT useful for summarizing information that is > widely dispersed around the web, such as questions like "what are the > most popular objections to X?"? However, the other day for a blog post I > was writing I asked it "What are some sources on the relationship of X > to Y?"? It gave me four sources of information, including the article > title, where it was published and who wrote it.?? > > This looked great, especially since I recognized two of the author names > as authorities on X.? However, when I then did a Google search, I could > not track down any of the four articles, either by title, author or > place of publication.? I tried both in Google and in Bing.? Zilch! > > Could ChatGPT have totally made up these sources?? If so, how does that > work? > > I am baffled about the explanation of this.? One of the publications > involved was Psychology Today, so we are not talking about obscure > corners of the Internet or sites that would have disappeared recently. > > Thanks for any insights. > > Marcia Yudkin > Introvert UpThink > Introvert UpThink | Marcia Yudkin | Substack > > > > > > Introvert UpThink | Marcia Yudkin | Substack > ? Marcia Yudkin > ? Exploring how introverts are misunderstood, maligned and > underappreciated in our culture - yet still thrive. Cli... > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members From rob at 2disc.com Sat Mar 11 20:18:45 2023 From: rob at 2disc.com (Rob Laporte) Date: Sat, 11 Mar 2023 15:18:45 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <1714938371.1314807.1678498586187@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> Message-ID: <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> I've noticed that humans simplify complex, especially threatening, new things, by using dichotomies of good and evil, red and blue, etc, and that conviction is often inversely proportionate to knowledge. I've worked in search marketing for 28 years, have digested a lot about the kind of tech and processes underlying chatGPT (gpt), and I have no sound basis for strong conviction on any of the issues broached here. But this I can tell you from my mere 6 hours novice use so far: In under an hour it solved a complex HR-financial legal question, and provided the letter the plaintiff should write to the corporation's HR department, with quality of writing, sensitivity to workplace politics, and effective brevity way better than anything I can recall in my 50 years of adulthood, decade teaching college lit and writing, and 3 decades in search marketing. Truly stunning. Save at least $2000 in legal fees that might have gone to a local law firm. A few times over the years I researched best email spam blocking solutions, and considered an aggressive form of it. gpt explained the problem with that solution, and did so way faster than my past and still inclusive Google searches, saving me a few hundred dollars in IT consulting. It completely conflated my semi-famous lawyer grandad's bio with that of his stellar but less accomplished son of the same name. Both are years deceased (most gpt data ends Sept '21), yet totally wrong. So too it got the bio of a decade-deceased famous scholar of Roman architecture (friend's dad) wrong on a few points, most notably that most his career was at Smith college, not Princeton as gpt said.? It produced strikingly eloquent spreadsheet solutions for two different complex purposes. I asked it for the actual spreadsheet, and cell references were off, but in a second half hour of work, I'm sure it wold have gotten it right or I could correct it myself. A few hours of work time saved there, and one of the two tasks was billable.? My firm had a prospective writing project for a client, involving bios of famous and notable painters sold by the client. I say "had" because now gpt or its structuring within services like https://neuroflash.com/?will cut both client and my firm's copywriter time substantially.? I've not tried but viewed a YouTube of good, arguably very good, graphic design for a marketing campaign done in well under half a day. Outside of broad ideological judgements, there's much to consider in how gpt will change work and incomes.? The current version 3.5 will be upgraded to 4.0 within weeks. Think of gpt like the web in 1994 or personal PCs in 1981, with advancements happening 10x faster.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money mailto:Rob at 2disc.com,?413-584-6500 https://www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss wrote --- David, Some great points there.? I especially like this one: >>it is *ALL* made up.<< That helps me to dimly understand that everything the chat says is simply plausible, no more than that. Maybe we should think of it as no more authoritative than the cocktail party chatter of someone who reads indiscriminately and can't shut up until they've spewed five paragraphs. Marcia On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via Hidden-discuss wrote: >From what I understand (admittedly from only a *basic* understanding of machine learning), it is not so much that ChatGPT is "making errors", but rather that it is "making stuff up", and does not admit that it is making stuff up. I'm going to brain dump what I think here, but I'm not an expert in this by any stretch, so don't take me as an authority.? Perhaps this can help you reason about ChartGPT until you find a better expert to consult ;) One thing to understand is that this is a *trained* model.? That means that it was given a set of questions and answers and told "these are good, these are bad", probably with a rating of *how* good or bad.? Then it was given a lot of other data (and how exactly this gets turned into questions and answers is *way* beyond my knowledge level).? Then a team of model trainers started asking questions.? The trainers would look at the answers it came up with and rate them, thus adding to the "trained" data set.? When you tell ChatGPT that its answer was good or bad, you are also potentially adding to that training data, by the way. I'm guessing that the way the system works there is actually no way for it to "know" that it has made something up.? The output that it produces is generated based on what you can think of as a very advanced version of statistical language modelling:? given a certain input, what are the most likely kinds of things that would follow as a response?? And like any statistical model, when you get enough standard deviations out, things get weird.? At no point in the model output are things tagged as "made up" or "not made up":? it is *ALL* made up. In the middle of the bell curve the made up things are *much* more likely to be "correct" than out at the edges of the bell curve.? But oh those edges... It is of course more sophisticated than a statistical model, but the same principle applies:? if there are few examples of *exactly* the kind of data your input contains, then it is going to draw from stuff that is a lot less closely related to your input for its response.? But, and here is the important part, it is going to make up *something* to answer with.? If a source is mentioned multiple times in the context of your input, it will use it.? If there are no sources mentioned in the context of your input, it will generate an output that looks like the *kind of thing* that would be a response to that *kind of input*.? In this case that included a list of articles.? It generated at least one of them from an author whose name was probably mentioned in the context of your input, but never with an actual article name attached.? Or maybe that author was mentioned in the context of conversations containing a subset of the *words* in your input (rather than logically formed sentences), depending on just how fuzzy the match was.? Then it effectively made up a plausible sounding article name to go with the author name, because that's what responses to other similar questions in its training data looked like (not similar in content, but similar in *form*). So while I agree that making up all the sources seems like an extreme example of this, ChatGPT is what Science Fiction calls an "Artificial Stupid" (something that can't actually *reason*), and thus I think my explanation is plausible.? It just depends on how fuzzy the match was that it made on the input.? If the match was very fuzzy, then it would have come back with material from its data that generally followed at least some of your input, and then since responses the trainers considered "good" to questions like that usually included some sources, it made some up based on how the answers to other, less related, questions looked. Anyone want to bet that four sources was the average number that was accepted as "a good answer" by the people who did the training?? I know I've seen "four things" in a couple of ChatGPT answers, and I haven't asked it very many questions :) Given all this, there are only two things you can do, one of which is exactly what you did: ask it for the sources.? Given *that* input, it should be able to come up with the most likely response being the actual source.? If it can't, then it has probably made up the source (note: I have not tested this technique myself, but it follows logically from how I think the system works). The second thing you can do (which you probably also already did) is to rephrase your input, giving it different amounts and kinds of context, and see how the output changes.? If your altered input results in a less fuzzy match, you will get better answers. The big takeaway, which you clearly already know, is to never trust anything ChatGPT produces.? Use it as a rough draft, but verify all the facts. My fear is that there are going to be a lot of people who aren't as diligent, and we'll end up with a lot of made up information out on the web adding to all of the maliciously bad information that is already out there.? I have read that the ChatGPT researchers are worried about how to avoid using ChatGPT's output as input to a later ChatGPT model, and I have no idea how they are going to achieve that! And keep in mind that that maliciously bad information *is part of ChatGPT's data set*.? Some of it the people who did the training will have caught, but I'm willing to bet they missed a lot of it because *they* didn't know it was bad, or it never came up during training. --David On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss wrote: > Yes, I know that people have been pointing out "ridiculous factual errors" from ChatGPT.? ?However, to make up sources that sound completely plausible but are fake seems like it belongs in a whole other category. > > > > > > > On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank wrote: > > > > > > ChatGPT is a conversation engine, not a search engine.? It is designed > to provide plausible responses based on similarity of questions and > answers to existing material on the internet, without attempting to > correlate its responses with actual facts.? Pretty much every social > media space I follow has had multiple posts from people pointing out > ridiculous factual errors from ChatGPT. > > --Alan > > > -------- Original Message -------- > Subject: [Hidden-tech] Question about ChatGPT and machine learning > Date: 2023-03-09 15:29 > From: Marcia Yudkin via Hidden-discuss > > To: "mailto:Hidden-discuss at lists.hidden-tech.net" > > > This question is for anyone who understands how the machine learning in > ChatGPT works. > > I've been finding ChatGPT useful for summarizing information that is > widely dispersed around the web, such as questions like "what are the > most popular objections to X?"? However, the other day for a blog post I > was writing I asked it "What are some sources on the relationship of X > to Y?"? It gave me four sources of information, including the article > title, where it was published and who wrote it.?? > > This looked great, especially since I recognized two of the author names > as authorities on X.? However, when I then did a Google search, I could > not track down any of the four articles, either by title, author or > place of publication.? I tried both in Google and in Bing.? Zilch! > > Could ChatGPT have totally made up these sources?? If so, how does that > work? > > I am baffled about the explanation of this.? One of the publications > involved was Psychology Today, so we are not talking about obscure > corners of the Internet or sites that would have disappeared recently. > > Thanks for any insights. > > Marcia Yudkin > Introvert UpThink > Introvert UpThink | Marcia Yudkin | Substack > > > > > > Introvert UpThink | Marcia Yudkin | Substack > ? Marcia Yudkin > ? Exploring how introverts are misunderstood, maligned and > underappreciated in our culture - yet still thrive. Cli... > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > mailto:Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > mailto:Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net mailto:Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net mailto:Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members -------------- next part -------------- An HTML attachment was scrubbed... URL: From yudkinyudkin at yahoo.com Sat Mar 11 20:55:35 2023 From: yudkinyudkin at yahoo.com (Marcia Yudkin) Date: Sat, 11 Mar 2023 20:55:35 +0000 (UTC) Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> Message-ID: <697951220.1424386.1678568135796@mail.yahoo.com> Rob, Those are very interesting examples.? It's a mixed track record, though.? Based on your experience, what would you say ChatGPT should and shouldn't be used for, or how it should or shouldn't be used? For example, based on the errors in bios you saw, would you still use it for those artist bios given that you'd have to meticulously fact check everything it wrote? Marcia Yudkin On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte wrote: I've noticed that humans simplify complex, especially threatening, new things, by using dichotomies of good and evil, red and blue, etc, and that conviction is often inversely proportionate to knowledge. I've worked in search marketing for 28 years, have digested a lot about the kind of tech and processes underlying chatGPT (gpt), and I have no sound basis for strong conviction on any of the issues broached here. But this I can tell you from my mere 6 hours novice use so far: ????* In under an hour it solved a complex HR-financial legal question, and provided the letter the plaintiff should write to the corporation's HR department, with quality of writing, sensitivity to workplace politics, and effective brevity way better than anything I can recall in my 50 years of adulthood, decade teaching college lit and writing, and 3 decades in search marketing. Truly stunning. Save at least $2000 in legal fees that might have gone to a local law firm. ????* A few times over the years I researched best email spam blocking solutions, and considered an aggressive form of it. gpt explained the problem with that solution, and did so way faster than my past and still inclusive Google searches, saving me a few hundred dollars in IT consulting. ????* It completely conflated my semi-famous lawyer grandad's bio with that of his stellar but less accomplished son of the same name. Both are years deceased (most gpt data ends Sept '21), yet totally wrong. ????* So too it got the bio of a decade-deceased famous scholar of Roman architecture (friend's dad) wrong on a few points, most notably that most his career was at Smith college, not Princeton as gpt said.? ????* It produced strikingly eloquent spreadsheet solutions for two different complex purposes. I asked it for the actual spreadsheet, and cell references were off, but in a second half hour of work, I'm sure it wold have gotten it right or I could correct it myself. A few hours of work time saved there, and one of the two tasks was billable.? ????* My firm had a prospective writing project for a client, involving bios of famous and notable painters sold by the client. I say "had" because now gpt or its structuring within services like NeuroFash?will cut both client and my firm's copywriter time substantially.? ????* I've not tried but viewed a YouTube of good, arguably very good, graphic design for a marketing campaign done in well under half a day. Outside of broad ideological judgements, there's much to consider in how gpt will change work and incomes.? The current version 3.5 will be upgraded to 4.0 within weeks. Think of gpt like the web in 1994 or personal PCs in 1981, with advancements happening 10x faster.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money Rob at 2disc.com,?413-584-6500 www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss wrote --- > David, > > Some great points there.? I especially like this one: > >>>it is *ALL* made up.<< > > That helps me to dimly understand that everything the chat says is simply plausible, no more than that. > > Maybe we should think of it as no more authoritative than the cocktail party chatter of someone who reads indiscriminately and can't shut up until they've spewed five paragraphs. > > Marcia > > > > > > > > > On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via Hidden-discuss wrote: > > > > > > From what I understand (admittedly from only a *basic* understanding of > machine learning), it is not so much that ChatGPT is "making errors", > but rather that it is "making stuff up", and does not admit that it is > making stuff up. > > I'm going to brain dump what I think here, but I'm not an expert in this > by any stretch, so don't take me as an authority.? Perhaps this can help > you reason about ChartGPT until you find a better expert to consult ;) > > One thing to understand is that this is a *trained* model.? That means > that it was given a set of questions and answers and told "these are > good, these are bad", probably with a rating of *how* good or bad.? Then > it was given a lot of other data (and how exactly this gets turned into > questions and answers is *way* beyond my knowledge level).? Then a team > of model trainers started asking questions.? The trainers would look at > the answers it came up with and rate them, thus adding to the "trained" > data set.? When you tell ChatGPT that its answer was good or bad, you > are also potentially adding to that training data, by the way. > > I'm guessing that the way the system works there is actually no way for > it to "know" that it has made something up.? The output that it produces > is generated based on what you can think of as a very advanced version > of statistical language modelling:? given a certain input, what are the > most likely kinds of things that would follow as a response?? And like > any statistical model, when you get enough standard deviations out, > things get weird.? At no point in the model output are things tagged as > "made up" or "not made up":? it is *ALL* made up. > > In the middle of the bell curve the made up things are *much* more > likely to be "correct" than out at the edges of the bell curve.? But > oh those edges... > > It is of course more sophisticated than a statistical model, but the > same principle applies:? if there are few examples of *exactly* the kind > of data your input contains, then it is going to draw from stuff that is > a lot less closely related to your input for its response.? But, and > here is the important part, it is going to make up *something* to answer > with.? If a source is mentioned multiple times in the context of your > input, it will use it.? If there are no sources mentioned in the context > of your input, it will generate an output that looks like the *kind of > thing* that would be a response to that *kind of input*.? In this case > that included a list of articles.? It generated at least one of them > from an author whose name was probably mentioned in the context of your > input, but never with an actual article name attached.? Or maybe that > author was mentioned in the context of conversations containing a > subset of the *words* in your input (rather than logically formed > sentences), depending on just how fuzzy the match was.? Then it > effectively made up a plausible sounding article name to go with the > author name, because that's what responses to other similar questions in > its training data looked like (not similar in content, but similar in > *form*). > > So while I agree that making up all the sources seems like an extreme > example of this, ChatGPT is what Science Fiction calls an "Artificial > Stupid" (something that can't actually *reason*), and thus I think my > explanation is plausible.? It just depends on how fuzzy the match was > that it made on the input.? If the match was very fuzzy, then it would > have come back with material from its data that generally followed at > least some of your input, and then since responses the trainers > considered "good" to questions like that usually included some sources, > it made some up based on how the answers to other, less related, > questions looked. > > Anyone want to bet that four sources was the average number that was > accepted as "a good answer" by the people who did the training?? I know > I've seen "four things" in a couple of ChatGPT answers, and I haven't > asked it very many questions :) > > Given all this, there are only two things you can do, one of which is > exactly what you did: ask it for the sources.? Given *that* input, it > should be able to come up with the most likely response being the actual > source.? If it can't, then it has probably made up the source (note: I > have not tested this technique myself, but it follows logically from how > I think the system works). > > The second thing you can do (which you probably also already did) is to > rephrase your input, giving it different amounts and kinds of context, > and see how the output changes.? If your altered input results in a less > fuzzy match, you will get better answers. > > The big takeaway, which you clearly already know, is to never trust > anything ChatGPT produces.? Use it as a rough draft, but verify all the > facts. > > My fear is that there are going to be a lot of people who aren't as > diligent, and we'll end up with a lot of made up information out on the > web adding to all of the maliciously bad information that is already out > there.? I have read that the ChatGPT researchers are worried about how > to avoid using ChatGPT's output as input to a later ChatGPT model, and I > have no idea how they are going to achieve that! > > And keep in mind that that maliciously bad information *is part of > ChatGPT's data set*.? Some of it the people who did the training will have > caught, but I'm willing to bet they missed a lot of it because *they* > didn't know it was bad, or it never came up during training. > > --David > > On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss wrote: >> Yes, I know that people have been pointing out "ridiculous factual errors" from ChatGPT.? ?However, to make up sources that sound completely plausible but are fake seems like it belongs in a whole other category. >> >> >> >> >> >> >> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank wrote: >> >> >> >> >> >> ChatGPT is a conversation engine, not a search engine.? It is designed >> to provide plausible responses based on similarity of questions and >> answers to existing material on the internet, without attempting to >> correlate its responses with actual facts.? Pretty much every social >> media space I follow has had multiple posts from people pointing out >> ridiculous factual errors from ChatGPT. >> >> --Alan >> >> >> -------- Original Message -------- >> Subject: [Hidden-tech] Question about ChatGPT and machine learning >> Date: 2023-03-09 15:29 >> From: Marcia Yudkin via Hidden-discuss >> >> To: "Hidden-discuss at lists.hidden-tech.net" >> >> >> This question is for anyone who understands how the machine learning in >> ChatGPT works. >> >> I've been finding ChatGPT useful for summarizing information that is >> widely dispersed around the web, such as questions like "what are the >> most popular objections to X?"? However, the other day for a blog post I >> was writing I asked it "What are some sources on the relationship of X >> to Y?"? It gave me four sources of information, including the article >> title, where it was published and who wrote it.?? >> >> This looked great, especially since I recognized two of the author names >> as authorities on X.? However, when I then did a Google search, I could >> not track down any of the four articles, either by title, author or >> place of publication.? I tried both in Google and in Bing.? Zilch! >> >> Could ChatGPT have totally made up these sources?? If so, how does that >> work? >> >> I am baffled about the explanation of this.? One of the publications >> involved was Psychology Today, so we are not talking about obscure >> corners of the Internet or sites that would have disappeared recently. >> >> Thanks for any insights. >> >> Marcia Yudkin >> Introvert UpThink >> Introvert UpThink | Marcia Yudkin | Substack >> >> >> >> >> >> Introvert UpThink | Marcia Yudkin | Substack >> ? Marcia Yudkin >> ? Exploring how introverts are misunderstood, maligned and >> underappreciated in our culture - yet still thrive. Cli... >> >> >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members From shel at principledprofit.com Sun Mar 12 00:49:11 2023 From: shel at principledprofit.com (Shel Horowitz) Date: Sat, 11 Mar 2023 19:49:11 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <20230310150355.CAE43B1401C@webabinitio.net> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> Message-ID: David: "My fear is that there are going to be a lot of people who aren't as diligent, and we'll end up with a lot of made up information out on the web adding to all of the maliciously bad information that is already out there." I share your worry. Chat GPT sounds so authoritative--or as Marcia called it, plausible--that we have to be careful to know when it's spewing nonsense--sort of like a certain media outlet that repeatedly admitted under oath recently that it is spewing nonsense. Rob: "humans simplify complex, especially threatening, new things, by using dichotomies of good and evil, red and blue, etc, and that conviction is often inversely proportionate to knowledge." An astute observation. But while your mixed-results examples show that the either-or switch doesn't work here, I'm not sure what does. If the point of ChatGPT is to reduce our workload but we have to verify all the results, I don't see it as useful And Rob, you were quick to save money on lawyer and copywriting fees,etc. But since you're an SEO person, how do you react if I tell you I plan to try the chatbot to discover search terms in my narrow niche that people are actually searching for. I just discovered that most of my search terms just aren't getting searches. It's hard to think of a career that isn't at risk. Teachers in my circle are especially upset because homework might be going away. This doesn't mean I'm some kind of Luddite who wants to ban it. But I do raise questions about what it all means for the future of work, careers, creativity, etc. And I do see a need for quality control. I landed on a site last week that was obviously written by a chatbot. I'm not familiar enough with GPT's output to be an expert. But I could tell within the first paragraph--and I left, taking with me any respect I had for the site owner. We will adapt. People who used to make buggy whips in 1890 were doing something else by 1910. Teachers might actually have a new wave of creativity in figuring out assignments that can't be done by a bot. Copywriters will reach into their quiver for the arrows that demonstrate their superiority over the generic writing, just as really good copywriters have always survived competition from cheaper but lesser writers. Shel Horowitz - "The Transformpreneur" ________________________________________________ Contact me to bake in profitability while addressing hunger, poverty, war, and catastrophic climate change * First business ever to be Green America Gold Certified * Inducted into the National Environmental Hall of Fame * Certified speaker: International Platform Association https://goingbeyondsustainability.com mailto:shel at greenandprofitable.com 413-586-2388 Award-winning, best-selling author of 10 books. Latest: Guerrilla Marketing to Heal the World (co-authored with Jay Conrad Levinson) Watch my TEDx Talk, "Impossible is a Dare: Business for a Better World" http://www.ted.com/tedx/events/11809 (move your mouse to "event videos") _________________________________________________ ? On Fri, Mar 10, 2023 at 10:03?AM R. David Murray via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > From what I understand (admittedly from only a *basic* understanding of > machine learning), it is not so much that ChatGPT is "making errors", > but rather that it is "making stuff up", and does not admit that it is > making stuff up. > > I'm going to brain dump what I think here, but I'm not an expert in this > by any stretch, so don't take me as an authority. Perhaps this can help > you reason about ChartGPT until you find a better expert to consult ;) > > One thing to understand is that this is a *trained* model. That means > that it was given a set of questions and answers and told "these are > good, these are bad", probably with a rating of *how* good or bad. Then > it was given a lot of other data (and how exactly this gets turned into > questions and answers is *way* beyond my knowledge level). Then a team > of model trainers started asking questions. The trainers would look at > the answers it came up with and rate them, thus adding to the "trained" > data set. When you tell ChatGPT that its answer was good or bad, you > are also potentially adding to that training data, by the way. > > I'm guessing that the way the system works there is actually no way for > it to "know" that it has made something up. The output that it produces > is generated based on what you can think of as a very advanced version > of statistical language modelling: given a certain input, what are the > most likely kinds of things that would follow as a response? And like > any statistical model, when you get enough standard deviations out, > things get weird. At no point in the model output are things tagged as > "made up" or "not made up": it is *ALL* made up. > > In the middle of the bell curve the made up things are *much* more > likely to be "correct" than out at the edges of the bell curve. But > oh those edges... > > It is of course more sophisticated than a statistical model, but the > same principle applies: if there are few examples of *exactly* the kind > of data your input contains, then it is going to draw from stuff that is > a lot less closely related to your input for its response. But, and > here is the important part, it is going to make up *something* to answer > with. If a source is mentioned multiple times in the context of your > input, it will use it. If there are no sources mentioned in the context > of your input, it will generate an output that looks like the *kind of > thing* that would be a response to that *kind of input*. In this case > that included a list of articles. It generated at least one of them > from an author whose name was probably mentioned in the context of your > input, but never with an actual article name attached. Or maybe that > author was mentioned in the context of conversations containing a > subset of the *words* in your input (rather than logically formed > sentences), depending on just how fuzzy the match was. Then it > effectively made up a plausible sounding article name to go with the > author name, because that's what responses to other similar questions in > its training data looked like (not similar in content, but similar in > *form*). > > So while I agree that making up all the sources seems like an extreme > example of this, ChatGPT is what Science Fiction calls an "Artificial > Stupid" (something that can't actually *reason*), and thus I think my > explanation is plausible. It just depends on how fuzzy the match was > that it made on the input. If the match was very fuzzy, then it would > have come back with material from its data that generally followed at > least some of your input, and then since responses the trainers > considered "good" to questions like that usually included some sources, > it made some up based on how the answers to other, less related, > questions looked. > > Anyone want to bet that four sources was the average number that was > accepted as "a good answer" by the people who did the training? I know > I've seen "four things" in a couple of ChatGPT answers, and I haven't > asked it very many questions :) > > Given all this, there are only two things you can do, one of which is > exactly what you did: ask it for the sources. Given *that* input, it > should be able to come up with the most likely response being the actual > source. If it can't, then it has probably made up the source (note: I > have not tested this technique myself, but it follows logically from how > I think the system works). > > The second thing you can do (which you probably also already did) is to > rephrase your input, giving it different amounts and kinds of context, > and see how the output changes. If your altered input results in a less > fuzzy match, you will get better answers. > > The big takeaway, which you clearly already know, is to never trust > anything ChatGPT produces. Use it as a rough draft, but verify all the > facts. > > My fear is that there are going to be a lot of people who aren't as > diligent, and we'll end up with a lot of made up information out on the > web adding to all of the maliciously bad information that is already out > there. I have read that the ChatGPT researchers are worried about how > to avoid using ChatGPT's output as input to a later ChatGPT model, and I > have no idea how they are going to achieve that! > > And keep in mind that that maliciously bad information *is part of > ChatGPT's data set*. Some of it the people who did the training will have > caught, but I'm willing to bet they missed a lot of it because *they* > didn't know it was bad, or it never came up during training. > > --David > > On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss < > hidden-discuss at lists.hidden-tech.net> wrote: > > Yes, I know that people have been pointing out "ridiculous factual > errors" from ChatGPT. However, to make up sources that sound completely > plausible but are fake seems like it belongs in a whole other category. > > > > > > > > > > > > > > On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank < > alan at 8wheels.org> wrote: > > > > > > > > > > > > ChatGPT is a conversation engine, not a search engine. It is designed > > to provide plausible responses based on similarity of questions and > > answers to existing material on the internet, without attempting to > > correlate its responses with actual facts. Pretty much every social > > media space I follow has had multiple posts from people pointing out > > ridiculous factual errors from ChatGPT. > > > > --Alan > > > > > > -------- Original Message -------- > > Subject: [Hidden-tech] Question about ChatGPT and machine learning > > Date: 2023-03-09 15:29 > > From: Marcia Yudkin via Hidden-discuss > > > > To: "Hidden-discuss at lists.hidden-tech.net" > > > > > > This question is for anyone who understands how the machine learning in > > ChatGPT works. > > > > I've been finding ChatGPT useful for summarizing information that is > > widely dispersed around the web, such as questions like "what are the > > most popular objections to X?" However, the other day for a blog post I > > was writing I asked it "What are some sources on the relationship of X > > to Y?" It gave me four sources of information, including the article > > title, where it was published and who wrote it. > > > > This looked great, especially since I recognized two of the author names > > as authorities on X. However, when I then did a Google search, I could > > not track down any of the four articles, either by title, author or > > place of publication. I tried both in Google and in Bing. Zilch! > > > > Could ChatGPT have totally made up these sources? If so, how does that > > work? > > > > I am baffled about the explanation of this. One of the publications > > involved was Psychology Today, so we are not talking about obscure > > corners of the Internet or sites that would have disappeared recently. > > > > Thanks for any insights. > > > > Marcia Yudkin > > Introvert UpThink > > Introvert UpThink | Marcia Yudkin | Substack > > > > > > > > > > > > Introvert UpThink | Marcia Yudkin | Substack > > Marcia Yudkin > > Exploring how introverts are misunderstood, maligned and > > underappreciated in our culture - yet still thrive. Cli... > > > > > > _______________________________________________ > > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > > Hidden-discuss at lists.hidden-tech.net > > > > You are receiving this because you are on the Hidden-Tech Discussion > > list. > > If you would like to change your list preferences, Go to the Members > > page on the Hidden Tech Web site. > > http://www.hidden-tech.net/members > > _______________________________________________ > > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > > Hidden-discuss at lists.hidden-tech.net > > > > You are receiving this because you are on the Hidden-Tech Discussion > list. > > If you would like to change your list preferences, Go to the Members > > page on the Hidden Tech Web site. > > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan at 8wheels.org Sun Mar 12 02:22:18 2023 From: alan at 8wheels.org (Alan Frank) Date: Sat, 11 Mar 2023 21:22:18 -0500 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <697951220.1424386.1678568135796@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> Message-ID: I would use it for composing text, but not for facts at all. And if I asked it "How would you introduce George Takei at a political convention"," I would expect well-written text, but would also fact-check everything. -------- Original Message -------- Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning Date: 2023-03-11 15:55 From: Marcia Yudkin via Hidden-discuss To: Rob Laporte Rob, Those are very interesting examples.? It's a mixed track record, though.? Based on your experience, what would you say ChatGPT should and shouldn't be used for, or how it should or shouldn't be used? For example, based on the errors in bios you saw, would you still use it for those artist bios given that you'd have to meticulously fact check everything it wrote? Marcia Yudkin On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte wrote: I've noticed that humans simplify complex, especially threatening, new things, by using dichotomies of good and evil, red and blue, etc, and that conviction is often inversely proportionate to knowledge. I've worked in search marketing for 28 years, have digested a lot about the kind of tech and processes underlying chatGPT (gpt), and I have no sound basis for strong conviction on any of the issues broached here. But this I can tell you from my mere 6 hours novice use so far: ????* In under an hour it solved a complex HR-financial legal question, and provided the letter the plaintiff should write to the corporation's HR department, with quality of writing, sensitivity to workplace politics, and effective brevity way better than anything I can recall in my 50 years of adulthood, decade teaching college lit and writing, and 3 decades in search marketing. Truly stunning. Save at least $2000 in legal fees that might have gone to a local law firm. ????* A few times over the years I researched best email spam blocking solutions, and considered an aggressive form of it. gpt explained the problem with that solution, and did so way faster than my past and still inclusive Google searches, saving me a few hundred dollars in IT consulting. ????* It completely conflated my semi-famous lawyer grandad's bio with that of his stellar but less accomplished son of the same name. Both are years deceased (most gpt data ends Sept '21), yet totally wrong. ????* So too it got the bio of a decade-deceased famous scholar of Roman architecture (friend's dad) wrong on a few points, most notably that most his career was at Smith college, not Princeton as gpt said.? ????* It produced strikingly eloquent spreadsheet solutions for two different complex purposes. I asked it for the actual spreadsheet, and cell references were off, but in a second half hour of work, I'm sure it wold have gotten it right or I could correct it myself. A few hours of work time saved there, and one of the two tasks was billable.? ????* My firm had a prospective writing project for a client, involving bios of famous and notable painters sold by the client. I say "had" because now gpt or its structuring within services like NeuroFash?will cut both client and my firm's copywriter time substantially.? ????* I've not tried but viewed a YouTube of good, arguably very good, graphic design for a marketing campaign done in well under half a day. Outside of broad ideological judgements, there's much to consider in how gpt will change work and incomes.? The current version 3.5 will be upgraded to 4.0 within weeks. Think of gpt like the web in 1994 or personal PCs in 1981, with advancements happening 10x faster.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money Rob at 2disc.com,?413-584-6500 www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss wrote --- > David, > > Some great points there.? I especially like this one: > >>> it is *ALL* made up.<< > > That helps me to dimly understand that everything the chat says is > simply plausible, no more than that. > > Maybe we should think of it as no more authoritative than the cocktail > party chatter of someone who reads indiscriminately and can't shut up > until they've spewed five paragraphs. > > Marcia > > > > > > > > > On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via > Hidden-discuss wrote: > > > > > > From what I understand (admittedly from only a *basic* understanding of > machine learning), it is not so much that ChatGPT is "making errors", > but rather that it is "making stuff up", and does not admit that it is > making stuff up. > > I'm going to brain dump what I think here, but I'm not an expert in > this > by any stretch, so don't take me as an authority.? Perhaps this can > help > you reason about ChartGPT until you find a better expert to consult ;) > > One thing to understand is that this is a *trained* model.? That means > that it was given a set of questions and answers and told "these are > good, these are bad", probably with a rating of *how* good or bad.? > Then > it was given a lot of other data (and how exactly this gets turned into > questions and answers is *way* beyond my knowledge level).? Then a team > of model trainers started asking questions.? The trainers would look at > the answers it came up with and rate them, thus adding to the "trained" > data set.? When you tell ChatGPT that its answer was good or bad, you > are also potentially adding to that training data, by the way. > > I'm guessing that the way the system works there is actually no way for > it to "know" that it has made something up.? The output that it > produces > is generated based on what you can think of as a very advanced version > of statistical language modelling:? given a certain input, what are the > most likely kinds of things that would follow as a response?? And like > any statistical model, when you get enough standard deviations out, > things get weird.? At no point in the model output are things tagged as > "made up" or "not made up":? it is *ALL* made up. > > In the middle of the bell curve the made up things are *much* more > likely to be "correct" than out at the edges of the bell curve.? But > oh those edges... > > It is of course more sophisticated than a statistical model, but the > same principle applies:? if there are few examples of *exactly* the > kind > of data your input contains, then it is going to draw from stuff that > is > a lot less closely related to your input for its response.? But, and > here is the important part, it is going to make up *something* to > answer > with.? If a source is mentioned multiple times in the context of your > input, it will use it.? If there are no sources mentioned in the > context > of your input, it will generate an output that looks like the *kind of > thing* that would be a response to that *kind of input*.? In this case > that included a list of articles.? It generated at least one of them > from an author whose name was probably mentioned in the context of your > input, but never with an actual article name attached.? Or maybe that > author was mentioned in the context of conversations containing a > subset of the *words* in your input (rather than logically formed > sentences), depending on just how fuzzy the match was.? Then it > effectively made up a plausible sounding article name to go with the > author name, because that's what responses to other similar questions > in > its training data looked like (not similar in content, but similar in > *form*). > > So while I agree that making up all the sources seems like an extreme > example of this, ChatGPT is what Science Fiction calls an "Artificial > Stupid" (something that can't actually *reason*), and thus I think my > explanation is plausible.? It just depends on how fuzzy the match was > that it made on the input.? If the match was very fuzzy, then it would > have come back with material from its data that generally followed at > least some of your input, and then since responses the trainers > considered "good" to questions like that usually included some sources, > it made some up based on how the answers to other, less related, > questions looked. > > Anyone want to bet that four sources was the average number that was > accepted as "a good answer" by the people who did the training?? I know > I've seen "four things" in a couple of ChatGPT answers, and I haven't > asked it very many questions :) > > Given all this, there are only two things you can do, one of which is > exactly what you did: ask it for the sources.? Given *that* input, it > should be able to come up with the most likely response being the > actual > source.? If it can't, then it has probably made up the source (note: I > have not tested this technique myself, but it follows logically from > how > I think the system works). > > The second thing you can do (which you probably also already did) is to > rephrase your input, giving it different amounts and kinds of context, > and see how the output changes.? If your altered input results in a > less > fuzzy match, you will get better answers. > > The big takeaway, which you clearly already know, is to never trust > anything ChatGPT produces.? Use it as a rough draft, but verify all the > facts. > > My fear is that there are going to be a lot of people who aren't as > diligent, and we'll end up with a lot of made up information out on the > web adding to all of the maliciously bad information that is already > out > there.? I have read that the ChatGPT researchers are worried about how > to avoid using ChatGPT's output as input to a later ChatGPT model, and > I > have no idea how they are going to achieve that! > > And keep in mind that that maliciously bad information *is part of > ChatGPT's data set*.? Some of it the people who did the training will > have > caught, but I'm willing to bet they missed a lot of it because *they* > didn't know it was bad, or it never came up during training. > > --David > > On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss > wrote: >> Yes, I know that people have been pointing out "ridiculous factual >> errors" from ChatGPT.? ?However, to make up sources that sound >> completely plausible but are fake seems like it belongs in a whole >> other category. >> >> >> >> >> >> >> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank >> wrote: >> >> >> >> >> >> ChatGPT is a conversation engine, not a search engine.? It is designed >> to provide plausible responses based on similarity of questions and >> answers to existing material on the internet, without attempting to >> correlate its responses with actual facts.? Pretty much every social >> media space I follow has had multiple posts from people pointing out >> ridiculous factual errors from ChatGPT. >> >> --Alan >> >> >> -------- Original Message -------- >> Subject: [Hidden-tech] Question about ChatGPT and machine learning >> Date: 2023-03-09 15:29 >> From: Marcia Yudkin via Hidden-discuss >> >> To: "Hidden-discuss at lists.hidden-tech.net" >> >> >> This question is for anyone who understands how the machine learning >> in >> ChatGPT works. >> >> I've been finding ChatGPT useful for summarizing information that is >> widely dispersed around the web, such as questions like "what are the >> most popular objections to X?"? However, the other day for a blog post >> I >> was writing I asked it "What are some sources on the relationship of X >> to Y?"? It gave me four sources of information, including the article >> title, where it was published and who wrote it.?? >> >> This looked great, especially since I recognized two of the author >> names >> as authorities on X.? However, when I then did a Google search, I >> could >> not track down any of the four articles, either by title, author or >> place of publication.? I tried both in Google and in Bing.? Zilch! >> >> Could ChatGPT have totally made up these sources?? If so, how does >> that >> work? >> >> I am baffled about the explanation of this.? One of the publications >> involved was Psychology Today, so we are not talking about obscure >> corners of the Internet or sites that would have disappeared recently. >> >> Thanks for any insights. >> >> Marcia Yudkin >> Introvert UpThink >> Introvert UpThink | Marcia Yudkin | Substack >> >> >> >> >> >> Introvert UpThink | Marcia Yudkin | Substack >> ? Marcia Yudkin >> ? Exploring how introverts are misunderstood, maligned and >> underappreciated in our culture - yet still thrive. Cli... >> >> >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members From yudkinyudkin at yahoo.com Sun Mar 12 02:45:45 2023 From: yudkinyudkin at yahoo.com (Marcia Yudkin) Date: Sun, 12 Mar 2023 02:45:45 +0000 (UTC) Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> Message-ID: <1077385726.1475766.1678589145074@mail.yahoo.com> I'm not sure I would call its output "well-written," though,? It's competently written, well-organized and with proper grammar, but stodgy in style and unimaginative, unless you've given it instructions like "write in the style of ____." On Saturday, March 11, 2023 at 04:22:19 PM HST, Alan Frank wrote: I would use it for composing text, but not for facts at all.? And if I asked it "How would you introduce George Takei at a political convention"," I would expect well-written text, but would also fact-check everything. -------- Original Message -------- Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning Date: 2023-03-11 15:55 From: Marcia Yudkin via Hidden-discuss To: Rob Laporte Rob, Those are very interesting examples.? It's a mixed track record, though.? Based on your experience, what would you say ChatGPT should and shouldn't be used for, or how it should or shouldn't be used? For example, based on the errors in bios you saw, would you still use it for those artist bios given that you'd have to meticulously fact check everything it wrote? Marcia Yudkin On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte wrote: I've noticed that humans simplify complex, especially threatening, new things, by using dichotomies of good and evil, red and blue, etc, and that conviction is often inversely proportionate to knowledge. I've worked in search marketing for 28 years, have digested a lot about the kind of tech and processes underlying chatGPT (gpt), and I have no sound basis for strong conviction on any of the issues broached here. But this I can tell you from my mere 6 hours novice use so far: ????* In under an hour it solved a complex HR-financial legal question, and provided the letter the plaintiff should write to the corporation's HR department, with quality of writing, sensitivity to workplace politics, and effective brevity way better than anything I can recall in my 50 years of adulthood, decade teaching college lit and writing, and 3 decades in search marketing. Truly stunning. Save at least $2000 in legal fees that might have gone to a local law firm. ????* A few times over the years I researched best email spam blocking solutions, and considered an aggressive form of it. gpt explained the problem with that solution, and did so way faster than my past and still inclusive Google searches, saving me a few hundred dollars in IT consulting. ????* It completely conflated my semi-famous lawyer grandad's bio with that of his stellar but less accomplished son of the same name. Both are years deceased (most gpt data ends Sept '21), yet totally wrong. ????* So too it got the bio of a decade-deceased famous scholar of Roman architecture (friend's dad) wrong on a few points, most notably that most his career was at Smith college, not Princeton as gpt said.? ????* It produced strikingly eloquent spreadsheet solutions for two different complex purposes. I asked it for the actual spreadsheet, and cell references were off, but in a second half hour of work, I'm sure it wold have gotten it right or I could correct it myself. A few hours of work time saved there, and one of the two tasks was billable.? ????* My firm had a prospective writing project for a client, involving bios of famous and notable painters sold by the client. I say "had" because now gpt or its structuring within services like NeuroFash?will cut both client and my firm's copywriter time substantially.? ????* I've not tried but viewed a YouTube of good, arguably very good, graphic design for a marketing campaign done in well under half a day. Outside of broad ideological judgements, there's much to consider in how gpt will change work and incomes.? The current version 3.5 will be upgraded to 4.0 within weeks. Think of gpt like the web in 1994 or personal PCs in 1981, with advancements happening 10x faster.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money Rob at 2disc.com,?413-584-6500 www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss wrote --- > David, > > Some great points there.? I especially like this one: > >>> it is *ALL* made up.<< > > That helps me to dimly understand that everything the chat says is > simply plausible, no more than that. > > Maybe we should think of it as no more authoritative than the cocktail > party chatter of someone who reads indiscriminately and can't shut up > until they've spewed five paragraphs. > > Marcia > > > > > > > > > On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via > Hidden-discuss wrote: > > > > > > From what I understand (admittedly from only a *basic* understanding of > machine learning), it is not so much that ChatGPT is "making errors", > but rather that it is "making stuff up", and does not admit that it is > making stuff up. > > I'm going to brain dump what I think here, but I'm not an expert in > this > by any stretch, so don't take me as an authority.? Perhaps this can > help > you reason about ChartGPT until you find a better expert to consult ;) > > One thing to understand is that this is a *trained* model.? That means > that it was given a set of questions and answers and told "these are > good, these are bad", probably with a rating of *how* good or bad.? > Then > it was given a lot of other data (and how exactly this gets turned into > questions and answers is *way* beyond my knowledge level).? Then a team > of model trainers started asking questions.? The trainers would look at > the answers it came up with and rate them, thus adding to the "trained" > data set.? When you tell ChatGPT that its answer was good or bad, you > are also potentially adding to that training data, by the way. > > I'm guessing that the way the system works there is actually no way for > it to "know" that it has made something up.? The output that it > produces > is generated based on what you can think of as a very advanced version > of statistical language modelling:? given a certain input, what are the > most likely kinds of things that would follow as a response?? And like > any statistical model, when you get enough standard deviations out, > things get weird.? At no point in the model output are things tagged as > "made up" or "not made up":? it is *ALL* made up. > > In the middle of the bell curve the made up things are *much* more > likely to be "correct" than out at the edges of the bell curve.? But > oh those edges... > > It is of course more sophisticated than a statistical model, but the > same principle applies:? if there are few examples of *exactly* the > kind > of data your input contains, then it is going to draw from stuff that > is > a lot less closely related to your input for its response.? But, and > here is the important part, it is going to make up *something* to > answer > with.? If a source is mentioned multiple times in the context of your > input, it will use it.? If there are no sources mentioned in the > context > of your input, it will generate an output that looks like the *kind of > thing* that would be a response to that *kind of input*.? In this case > that included a list of articles.? It generated at least one of them > from an author whose name was probably mentioned in the context of your > input, but never with an actual article name attached.? Or maybe that > author was mentioned in the context of conversations containing a > subset of the *words* in your input (rather than logically formed > sentences), depending on just how fuzzy the match was.? Then it > effectively made up a plausible sounding article name to go with the > author name, because that's what responses to other similar questions > in > its training data looked like (not similar in content, but similar in > *form*). > > So while I agree that making up all the sources seems like an extreme > example of this, ChatGPT is what Science Fiction calls an "Artificial > Stupid" (something that can't actually *reason*), and thus I think my > explanation is plausible.? It just depends on how fuzzy the match was > that it made on the input.? If the match was very fuzzy, then it would > have come back with material from its data that generally followed at > least some of your input, and then since responses the trainers > considered "good" to questions like that usually included some sources, > it made some up based on how the answers to other, less related, > questions looked. > > Anyone want to bet that four sources was the average number that was > accepted as "a good answer" by the people who did the training?? I know > I've seen "four things" in a couple of ChatGPT answers, and I haven't > asked it very many questions :) > > Given all this, there are only two things you can do, one of which is > exactly what you did: ask it for the sources.? Given *that* input, it > should be able to come up with the most likely response being the > actual > source.? If it can't, then it has probably made up the source (note: I > have not tested this technique myself, but it follows logically from > how > I think the system works). > > The second thing you can do (which you probably also already did) is to > rephrase your input, giving it different amounts and kinds of context, > and see how the output changes.? If your altered input results in a > less > fuzzy match, you will get better answers. > > The big takeaway, which you clearly already know, is to never trust > anything ChatGPT produces.? Use it as a rough draft, but verify all the > facts. > > My fear is that there are going to be a lot of people who aren't as > diligent, and we'll end up with a lot of made up information out on the > web adding to all of the maliciously bad information that is already > out > there.? I have read that the ChatGPT researchers are worried about how > to avoid using ChatGPT's output as input to a later ChatGPT model, and > I > have no idea how they are going to achieve that! > > And keep in mind that that maliciously bad information *is part of > ChatGPT's data set*.? Some of it the people who did the training will > have > caught, but I'm willing to bet they missed a lot of it because *they* > didn't know it was bad, or it never came up during training. > > --David > > On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss > wrote: >> Yes, I know that people have been pointing out "ridiculous factual >> errors" from ChatGPT.? ?However, to make up sources that sound >> completely plausible but are fake seems like it belongs in a whole >> other category. >> >> >> >> >> >> >> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank >> wrote: >> >> >> >> >> >> ChatGPT is a conversation engine, not a search engine.? It is designed >> to provide plausible responses based on similarity of questions and >> answers to existing material on the internet, without attempting to >> correlate its responses with actual facts.? Pretty much every social >> media space I follow has had multiple posts from people pointing out >> ridiculous factual errors from ChatGPT. >> >> --Alan >> >> >> -------- Original Message -------- >> Subject: [Hidden-tech] Question about ChatGPT and machine learning >> Date: 2023-03-09 15:29 >> From: Marcia Yudkin via Hidden-discuss >> >> To: "Hidden-discuss at lists.hidden-tech.net" >> >> >> This question is for anyone who understands how the machine learning >> in >> ChatGPT works. >> >> I've been finding ChatGPT useful for summarizing information that is >> widely dispersed around the web, such as questions like "what are the >> most popular objections to X?"? However, the other day for a blog post >> I >> was writing I asked it "What are some sources on the relationship of X >> to Y?"? It gave me four sources of information, including the article >> title, where it was published and who wrote it.?? >> >> This looked great, especially since I recognized two of the author >> names >> as authorities on X.? However, when I then did a Google search, I >> could >> not track down any of the four articles, either by title, author or >> place of publication.? I tried both in Google and in Bing.? Zilch! >> >> Could ChatGPT have totally made up these sources?? If so, how does >> that >> work? >> >> I am baffled about the explanation of this.? One of the publications >> involved was Psychology Today, so we are not talking about obscure >> corners of the Internet or sites that would have disappeared recently. >> >> Thanks for any insights. >> >> Marcia Yudkin >> Introvert UpThink >> Introvert UpThink | Marcia Yudkin | Substack >> >> >> >> >> >> Introvert UpThink | Marcia Yudkin | Substack >> ? Marcia Yudkin >> ? Exploring how introverts are misunderstood, maligned and >> underappreciated in our culture - yet still thrive. Cli... >> >> >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members From shel at principledprofit.com Sun Mar 12 15:57:09 2023 From: shel at principledprofit.com (Shel Horowitz) Date: Sun, 12 Mar 2023 11:57:09 -0400 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <1077385726.1475766.1678589145074@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> Message-ID: I agree with Marcia. ChatGPT is to a skilled writer as clip-art is to graphics. It's adequate for basic needs but is no substitute for a designer working specifically to convey your idea. I recently landed on a site that I could tell used a chatbot to write its copy. I was repelled and left quickly--and crossed that consultant off my list of people I'd ever hire, because she didn't disclose that a bot wrote the site. Ugh! Shel Horowitz - "The Transformpreneur" ________________________________________________ Contact me to bake in profitability while addressing hunger, poverty, war, and catastrophic climate change * First business ever to be Green America Gold Certified * Inducted into the National Environmental Hall of Fame * Certified speaker: International Platform Association https://goingbeyondsustainability.com mailto:shel at greenandprofitable.com 413-586-2388 Award-winning, best-selling author of 10 books. Latest: Guerrilla Marketing to Heal the World (co-authored with Jay Conrad Levinson) Watch my TEDx Talk, "Impossible is a Dare: Business for a Better World" http://www.ted.com/tedx/events/11809 (move your mouse to "event videos") _________________________________________________ On Sat, Mar 11, 2023 at 9:45?PM Marcia Yudkin via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > I'm not sure I would call its output "well-written," though, It's > competently written, well-organized and with proper grammar, but stodgy in > style and unimaginative, unless you've given it instructions like "write in > the style of ____." > > > > > > > On Saturday, March 11, 2023 at 04:22:19 PM HST, Alan Frank < > alan at 8wheels.org> wrote: > > > > > > I would use it for composing text, but not for facts at all. And if I > asked it "How would you introduce George Takei at a political > convention"," I would expect well-written text, but would also > fact-check everything. > > > -------- Original Message -------- > Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning > Date: 2023-03-11 15:55 > From: Marcia Yudkin via Hidden-discuss > > To: Rob Laporte > > Rob, > > Those are very interesting examples. It's a mixed track record, > though. Based on your experience, what would you say ChatGPT should and > shouldn't be used for, or how it should or shouldn't be used? > > For example, based on the errors in bios you saw, would you still use it > for those artist bios given that you'd have to meticulously fact check > everything it wrote? > > Marcia Yudkin > > > > > > > On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte > wrote: > > > > > > I've noticed that humans simplify complex, especially threatening, new > things, by using dichotomies of good and evil, red and blue, etc, and > that conviction is often inversely proportionate to knowledge. I've > worked in search marketing for 28 years, have digested a lot about the > kind of tech and processes underlying chatGPT (gpt), and I have no sound > basis for strong conviction on any of the issues broached here. But this > I can tell you from my mere 6 hours novice use so far: > > * In under an hour it solved a complex HR-financial legal question, > and provided the letter the plaintiff should write to the corporation's > HR department, with quality of writing, sensitivity to workplace > politics, and effective brevity way better than anything I can recall in > my 50 years of adulthood, decade teaching college lit and writing, and 3 > decades in search marketing. Truly stunning. Save at least $2000 in > legal fees that might have gone to a local law firm. > * A few times over the years I researched best email spam blocking > solutions, and considered an aggressive form of it. gpt explained the > problem with that solution, and did so way faster than my past and still > inclusive Google searches, saving me a few hundred dollars in IT > consulting. > * It completely conflated my semi-famous lawyer grandad's bio with > that of his stellar but less accomplished son of the same name. Both are > years deceased (most gpt data ends Sept '21), yet totally wrong. > * So too it got the bio of a decade-deceased famous scholar of Roman > architecture (friend's dad) wrong on a few points, most notably that > most his career was at Smith college, not Princeton as gpt said. > * It produced strikingly eloquent spreadsheet solutions for two > different complex purposes. I asked it for the actual spreadsheet, and > cell references were off, but in a second half hour of work, I'm sure it > wold have gotten it right or I could correct it myself. A few hours of > work time saved there, and one of the two tasks was billable. > * My firm had a prospective writing project for a client, involving > bios of famous and notable painters sold by the client. I say "had" > because now gpt or its structuring within services like NeuroFash will > cut both client and my firm's copywriter time substantially. > * I've not tried but viewed a YouTube of good, arguably very good, > graphic design for a marketing campaign done in well under half a day. > Outside of broad ideological judgements, there's much to consider in how > gpt will change work and incomes. > > The current version 3.5 will be upgraded to 4.0 within weeks. Think of > gpt like the web in 1994 or personal PCs in 1981, with advancements > happening 10x faster. > > Best Regards, > > Rob Laporte > CEO | R&D Manager > DISC - Making Web Sites Make Money > Rob at 2disc.com, 413-584-6500 > www.2disc.com > > NOTE: Emails can be blocked by spam filters throughout the web. If you > don?t get a reply within an expected span of time, please call. > > > > ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss > wrote --- > > > David, > > > > Some great points there. I especially like this one: > > > >>> it is *ALL* made up.<< > > > > That helps me to dimly understand that everything the chat says is > > simply plausible, no more than that. > > > > Maybe we should think of it as no more authoritative than the cocktail > > party chatter of someone who reads indiscriminately and can't shut up > > until they've spewed five paragraphs. > > > > Marcia > > > > > > > > > > > > > > > > > > On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via > > Hidden-discuss wrote: > > > > > > > > > > > > From what I understand (admittedly from only a *basic* understanding of > > machine learning), it is not so much that ChatGPT is "making errors", > > but rather that it is "making stuff up", and does not admit that it is > > making stuff up. > > > > I'm going to brain dump what I think here, but I'm not an expert in > > this > > by any stretch, so don't take me as an authority. Perhaps this can > > help > > you reason about ChartGPT until you find a better expert to consult ;) > > > > One thing to understand is that this is a *trained* model. That means > > that it was given a set of questions and answers and told "these are > > good, these are bad", probably with a rating of *how* good or bad. > > Then > > it was given a lot of other data (and how exactly this gets turned into > > questions and answers is *way* beyond my knowledge level). Then a team > > of model trainers started asking questions. The trainers would look at > > the answers it came up with and rate them, thus adding to the "trained" > > data set. When you tell ChatGPT that its answer was good or bad, you > > are also potentially adding to that training data, by the way. > > > > I'm guessing that the way the system works there is actually no way for > > it to "know" that it has made something up. The output that it > > produces > > is generated based on what you can think of as a very advanced version > > of statistical language modelling: given a certain input, what are the > > most likely kinds of things that would follow as a response? And like > > any statistical model, when you get enough standard deviations out, > > things get weird. At no point in the model output are things tagged as > > "made up" or "not made up": it is *ALL* made up. > > > > In the middle of the bell curve the made up things are *much* more > > likely to be "correct" than out at the edges of the bell curve. But > > oh those edges... > > > > It is of course more sophisticated than a statistical model, but the > > same principle applies: if there are few examples of *exactly* the > > kind > > of data your input contains, then it is going to draw from stuff that > > is > > a lot less closely related to your input for its response. But, and > > here is the important part, it is going to make up *something* to > > answer > > with. If a source is mentioned multiple times in the context of your > > input, it will use it. If there are no sources mentioned in the > > context > > of your input, it will generate an output that looks like the *kind of > > thing* that would be a response to that *kind of input*. In this case > > that included a list of articles. It generated at least one of them > > from an author whose name was probably mentioned in the context of your > > input, but never with an actual article name attached. Or maybe that > > author was mentioned in the context of conversations containing a > > subset of the *words* in your input (rather than logically formed > > sentences), depending on just how fuzzy the match was. Then it > > effectively made up a plausible sounding article name to go with the > > author name, because that's what responses to other similar questions > > in > > its training data looked like (not similar in content, but similar in > > *form*). > > > > So while I agree that making up all the sources seems like an extreme > > example of this, ChatGPT is what Science Fiction calls an "Artificial > > Stupid" (something that can't actually *reason*), and thus I think my > > explanation is plausible. It just depends on how fuzzy the match was > > that it made on the input. If the match was very fuzzy, then it would > > have come back with material from its data that generally followed at > > least some of your input, and then since responses the trainers > > considered "good" to questions like that usually included some sources, > > it made some up based on how the answers to other, less related, > > questions looked. > > > > Anyone want to bet that four sources was the average number that was > > accepted as "a good answer" by the people who did the training? I know > > I've seen "four things" in a couple of ChatGPT answers, and I haven't > > asked it very many questions :) > > > > Given all this, there are only two things you can do, one of which is > > exactly what you did: ask it for the sources. Given *that* input, it > > should be able to come up with the most likely response being the > > actual > > source. If it can't, then it has probably made up the source (note: I > > have not tested this technique myself, but it follows logically from > > how > > I think the system works). > > > > The second thing you can do (which you probably also already did) is to > > rephrase your input, giving it different amounts and kinds of context, > > and see how the output changes. If your altered input results in a > > less > > fuzzy match, you will get better answers. > > > > The big takeaway, which you clearly already know, is to never trust > > anything ChatGPT produces. Use it as a rough draft, but verify all the > > facts. > > > > My fear is that there are going to be a lot of people who aren't as > > diligent, and we'll end up with a lot of made up information out on the > > web adding to all of the maliciously bad information that is already > > out > > there. I have read that the ChatGPT researchers are worried about how > > to avoid using ChatGPT's output as input to a later ChatGPT model, and > > I > > have no idea how they are going to achieve that! > > > > And keep in mind that that maliciously bad information *is part of > > ChatGPT's data set*. Some of it the people who did the training will > > have > > caught, but I'm willing to bet they missed a lot of it because *they* > > didn't know it was bad, or it never came up during training. > > > > --David > > > > On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss > > wrote: > >> Yes, I know that people have been pointing out "ridiculous factual > >> errors" from ChatGPT. However, to make up sources that sound > >> completely plausible but are fake seems like it belongs in a whole > >> other category. > >> > >> > >> > >> > >> > >> > >> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank > >> wrote: > >> > >> > >> > >> > >> > >> ChatGPT is a conversation engine, not a search engine. It is designed > >> to provide plausible responses based on similarity of questions and > >> answers to existing material on the internet, without attempting to > >> correlate its responses with actual facts. Pretty much every social > >> media space I follow has had multiple posts from people pointing out > >> ridiculous factual errors from ChatGPT. > >> > >> --Alan > >> > >> > >> -------- Original Message -------- > >> Subject: [Hidden-tech] Question about ChatGPT and machine learning > >> Date: 2023-03-09 15:29 > >> From: Marcia Yudkin via Hidden-discuss > >> > >> To: "Hidden-discuss at lists.hidden-tech.net" > >> > >> > >> This question is for anyone who understands how the machine learning > >> in > >> ChatGPT works. > >> > >> I've been finding ChatGPT useful for summarizing information that is > >> widely dispersed around the web, such as questions like "what are the > >> most popular objections to X?" However, the other day for a blog post > >> I > >> was writing I asked it "What are some sources on the relationship of X > >> to Y?" It gave me four sources of information, including the article > >> title, where it was published and who wrote it. > >> > >> This looked great, especially since I recognized two of the author > >> names > >> as authorities on X. However, when I then did a Google search, I > >> could > >> not track down any of the four articles, either by title, author or > >> place of publication. I tried both in Google and in Bing. Zilch! > >> > >> Could ChatGPT have totally made up these sources? If so, how does > >> that > >> work? > >> > >> I am baffled about the explanation of this. One of the publications > >> involved was Psychology Today, so we are not talking about obscure > >> corners of the Internet or sites that would have disappeared recently. > >> > >> Thanks for any insights. > >> > >> Marcia Yudkin > >> Introvert UpThink > >> Introvert UpThink | Marcia Yudkin | Substack > >> > >> > >> > >> > >> > >> Introvert UpThink | Marcia Yudkin | Substack > >> Marcia Yudkin > >> Exploring how introverts are misunderstood, maligned and > >> underappreciated in our culture - yet still thrive. Cli... > >> > >> > >> _______________________________________________ > >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net > >> Hidden-discuss at lists.hidden-tech.net > >> > >> You are receiving this because you are on the Hidden-Tech Discussion > >> list. > >> If you would like to change your list preferences, Go to the Members > >> page on the Hidden Tech Web site. > >> http://www.hidden-tech.net/members > >> _______________________________________________ > >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net > >> Hidden-discuss at lists.hidden-tech.net > >> > >> You are receiving this because you are on the Hidden-Tech Discussion > >> list. > >> If you would like to change your list preferences, Go to the Members > >> page on the Hidden Tech Web site. > >> http://www.hidden-tech.net/members > > _______________________________________________ > > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > > Hidden-discuss at lists.hidden-tech.net > > > > You are receiving this because you are on the Hidden-Tech Discussion > > list. > > If you would like to change your list preferences, Go to the Members > > page on the Hidden Tech Web site. > > http://www.hidden-tech.net/members > > _______________________________________________ > > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > > Hidden-discuss at lists.hidden-tech.net > > > > You are receiving this because you are on the Hidden-Tech Discussion > > list. > > If you would like to change your list preferences, Go to the Members > > page on the Hidden Tech Web site. > > http://www.hidden-tech.net/members > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich at tnrglobal.com Mon Mar 13 22:14:33 2023 From: rich at tnrglobal.com (Rich@tnr) Date: Mon, 13 Mar 2023 18:14:33 -0400 Subject: [Hidden-tech] looking for Franklin County Techies interested in Digital Economy development Message-ID: NO nothing to do with SVB There is a developing project in collaboration with many of the business development groups you are probably familiar with that will have a few meetings next week - some of you might already be familiar with it. Next sessions: ??? Tues (21st)? - about building a tech culture and tech talent ??? Weds (22nd) - building business infrastructure with funding for Tech business I am awaiting more info about the scale and location of the meetings. So if you are interested - respond to me offline Let me know -- Rich Roth CEO TnR Global Bio and personal blog: http://rizbang.com Building the really big sites: http://www.tnrglobal.com Small/Soho business in the PV: http://www.hidden-tech.net Places to meet for business: http://www.meetmewhere.com And for Arts and relaxation: http://TarotMuertos.com - Artistic Tarot Deck http://www.welovemuseums.com http://www.artonmytv.com/ Shakers: http://www.shakerpedia.com/ Helping move the world: http://www.earththrives.com -------------- next part -------------- A non-text attachment was scrubbed... Name: rich.vcf Type: text/vcard Size: 4 bytes Desc: not available URL: From rob at 2disc.com Tue Mar 14 14:38:44 2023 From: rob at 2disc.com (Rob Laporte) Date: Tue, 14 Mar 2023 10:38:44 -0400 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> Message-ID: <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> Hi All, Good discussion I wish I had more time for, but a few replies to various points made here: Shell summarizes well: ?But I do raise questions about what it all means for the future of work, careers, creativity, etc?. Also true, ?a new wave of creativity? can happen. Chatbot-written platforms long preceded gpt, so one can find vastly inferior website content now from those now instantly antiquated platforms.? Fact-checking and editing must happen, but that?s less time than composing all, I think. I?ve not tested gpt?s writing extensively, but that legal letter I mentioned was truly consummate, and will be used without a single word changed. I have noticed redundancy, though usually resulting from how I asked a question.? Note that one way gpt and the like is and will be trained to improve is via sophisticated AI-human systems by which to detect chatbot text, making the writing output constantly better. Regarding SEO, a ton to say, but for now: gpt expedites all kinds of SEO work.? My firm has long prepared for AI, esp last 2 years:? It?s about ?Things not stings? (entities or concepts, not text strings). But yes, keyword research, a model still prevailing in most software that can't change cost-effectively now, will be expedited by gpt. My firm is currently researching to what extent old-fashioned strings correlate to the replacement: Entities within the https://searchengineland.com/google-mum-update-seo-future-383551 revolution. Web marketing pros must coordinate marketing channels efficiently; AI is not good at dovetailing separate silos of knowledge, like SEO, PPC, client profit logic, etc, and it's terrible at prioritizing (within a firm's) context. Hence Shell?s ?creativity.? My published prediction 10 years ago that by now AI would relieve SEO pros of lots of routine tech work (via Google Search Console) has been proven way too optimistic--we?re still far from that fine day unfortunately.? Extrapolating from the SEO remarks, I believe professionals in most fields will have to stay abreast of adjacent fields of knowledge, so as to be able to efficiently coordinate riders of separate AI horses. Broadly contextual solutions outside single domains of knowledge is something I predict AI will remain deficient in for several years. This goes to Shell?s ?new wave of creativity.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money mailto:Rob at 2disc.com,?413-584-6500 https://www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Sun, 12 Mar 2023 11:57:09 -0400 Shel Horowitz via Hidden-discuss wrote --- I agree with Marcia. ChatGPT is to a skilled writer as clip-art is to graphics. It's adequate for basic needs but is no substitute for a designer working specifically to convey your idea. I recently landed on a site that I could tell used a chatbot to write its copy. I was repelled and left quickly--and crossed that consultant off my list of people I'd ever hire, because she didn't disclose that a bot wrote the site. Ugh! Shel Horowitz - "The Transformpreneur" ________________________________________________ Contact me to bake in profitability while addressing hunger,? poverty, war, and catastrophic climate change * First business ever to be Green America Gold Certified * Inducted into the National Environmental Hall of Fame * Certified speaker: International Platform Association https://goingbeyondsustainability.com? mailto:mailto:shel at greenandprofitable.com 413-586-2388 Award-winning, best-selling author of 10 books.? Latest: Guerrilla Marketing to Heal the World? (co-authored with Jay Conrad Levinson) Watch my TEDx Talk, "Impossible is a Dare: Business for a Better World" http://www.ted.com/tedx/events/11809? (move your mouse to "event videos") _________________________________________________ On Sat, Mar 11, 2023 at 9:45?PM Marcia Yudkin via Hidden-discuss wrote: _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net mailto:Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members I'm not sure I would call its output "well-written," though,? It's competently written, well-organized and with proper grammar, but stodgy in style and unimaginative, unless you've given it instructions like "write in the style of ____." On Saturday, March 11, 2023 at 04:22:19 PM HST, Alan Frank wrote: I would use it for composing text, but not for facts at all.? And if I asked it "How would you introduce George Takei at a political convention"," I would expect well-written text, but would also fact-check everything. -------- Original Message -------- Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning Date: 2023-03-11 15:55 From: Marcia Yudkin via Hidden-discuss To: Rob Laporte Rob, Those are very interesting examples.? It's a mixed track record, though.? Based on your experience, what would you say ChatGPT should and shouldn't be used for, or how it should or shouldn't be used? For example, based on the errors in bios you saw, would you still use it for those artist bios given that you'd have to meticulously fact check everything it wrote? Marcia Yudkin On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte wrote: I've noticed that humans simplify complex, especially threatening, new things, by using dichotomies of good and evil, red and blue, etc, and that conviction is often inversely proportionate to knowledge. I've worked in search marketing for 28 years, have digested a lot about the kind of tech and processes underlying chatGPT (gpt), and I have no sound basis for strong conviction on any of the issues broached here. But this I can tell you from my mere 6 hours novice use so far: ????* In under an hour it solved a complex HR-financial legal question, and provided the letter the plaintiff should write to the corporation's HR department, with quality of writing, sensitivity to workplace politics, and effective brevity way better than anything I can recall in my 50 years of adulthood, decade teaching college lit and writing, and 3 decades in search marketing. Truly stunning. Save at least $2000 in legal fees that might have gone to a local law firm. ????* A few times over the years I researched best email spam blocking solutions, and considered an aggressive form of it. gpt explained the problem with that solution, and did so way faster than my past and still inclusive Google searches, saving me a few hundred dollars in IT consulting. ????* It completely conflated my semi-famous lawyer grandad's bio with that of his stellar but less accomplished son of the same name. Both are years deceased (most gpt data ends Sept '21), yet totally wrong. ????* So too it got the bio of a decade-deceased famous scholar of Roman architecture (friend's dad) wrong on a few points, most notably that most his career was at Smith college, not Princeton as gpt said.? ????* It produced strikingly eloquent spreadsheet solutions for two different complex purposes. I asked it for the actual spreadsheet, and cell references were off, but in a second half hour of work, I'm sure it wold have gotten it right or I could correct it myself. A few hours of work time saved there, and one of the two tasks was billable.? ????* My firm had a prospective writing project for a client, involving bios of famous and notable painters sold by the client. I say "had" because now gpt or its structuring within services like NeuroFash?will cut both client and my firm's copywriter time substantially.? ????* I've not tried but viewed a YouTube of good, arguably very good, graphic design for a marketing campaign done in well under half a day. Outside of broad ideological judgements, there's much to consider in how gpt will change work and incomes.? The current version 3.5 will be upgraded to 4.0 within weeks. Think of gpt like the web in 1994 or personal PCs in 1981, with advancements happening 10x faster.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money mailto:Rob at 2disc.com,?413-584-6500 http://www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss wrote --- > David, > > Some great points there.? I especially like this one: > >>> it is *ALL* made up.<< > > That helps me to dimly understand that everything the chat says is > simply plausible, no more than that. > > Maybe we should think of it as no more authoritative than the cocktail > party chatter of someone who reads indiscriminately and can't shut up > until they've spewed five paragraphs. > > Marcia > > > > > > > > > On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via > Hidden-discuss wrote: > > > > > > From what I understand (admittedly from only a *basic* understanding of > machine learning), it is not so much that ChatGPT is "making errors", > but rather that it is "making stuff up", and does not admit that it is > making stuff up. > > I'm going to brain dump what I think here, but I'm not an expert in > this > by any stretch, so don't take me as an authority.? Perhaps this can > help > you reason about ChartGPT until you find a better expert to consult ;) > > One thing to understand is that this is a *trained* model.? That means > that it was given a set of questions and answers and told "these are > good, these are bad", probably with a rating of *how* good or bad.? > Then > it was given a lot of other data (and how exactly this gets turned into > questions and answers is *way* beyond my knowledge level).? Then a team > of model trainers started asking questions.? The trainers would look at > the answers it came up with and rate them, thus adding to the "trained" > data set.? When you tell ChatGPT that its answer was good or bad, you > are also potentially adding to that training data, by the way. > > I'm guessing that the way the system works there is actually no way for > it to "know" that it has made something up.? The output that it > produces > is generated based on what you can think of as a very advanced version > of statistical language modelling:? given a certain input, what are the > most likely kinds of things that would follow as a response?? And like > any statistical model, when you get enough standard deviations out, > things get weird.? At no point in the model output are things tagged as > "made up" or "not made up":? it is *ALL* made up. > > In the middle of the bell curve the made up things are *much* more > likely to be "correct" than out at the edges of the bell curve.? But > oh those edges... > > It is of course more sophisticated than a statistical model, but the > same principle applies:? if there are few examples of *exactly* the > kind > of data your input contains, then it is going to draw from stuff that > is > a lot less closely related to your input for its response.? But, and > here is the important part, it is going to make up *something* to > answer > with.? If a source is mentioned multiple times in the context of your > input, it will use it.? If there are no sources mentioned in the > context > of your input, it will generate an output that looks like the *kind of > thing* that would be a response to that *kind of input*.? In this case > that included a list of articles.? It generated at least one of them > from an author whose name was probably mentioned in the context of your > input, but never with an actual article name attached.? Or maybe that > author was mentioned in the context of conversations containing a > subset of the *words* in your input (rather than logically formed > sentences), depending on just how fuzzy the match was.? Then it > effectively made up a plausible sounding article name to go with the > author name, because that's what responses to other similar questions > in > its training data looked like (not similar in content, but similar in > *form*). > > So while I agree that making up all the sources seems like an extreme > example of this, ChatGPT is what Science Fiction calls an "Artificial > Stupid" (something that can't actually *reason*), and thus I think my > explanation is plausible.? It just depends on how fuzzy the match was > that it made on the input.? If the match was very fuzzy, then it would > have come back with material from its data that generally followed at > least some of your input, and then since responses the trainers > considered "good" to questions like that usually included some sources, > it made some up based on how the answers to other, less related, > questions looked. > > Anyone want to bet that four sources was the average number that was > accepted as "a good answer" by the people who did the training?? I know > I've seen "four things" in a couple of ChatGPT answers, and I haven't > asked it very many questions :) > > Given all this, there are only two things you can do, one of which is > exactly what you did: ask it for the sources.? Given *that* input, it > should be able to come up with the most likely response being the > actual > source.? If it can't, then it has probably made up the source (note: I > have not tested this technique myself, but it follows logically from > how > I think the system works). > > The second thing you can do (which you probably also already did) is to > rephrase your input, giving it different amounts and kinds of context, > and see how the output changes.? If your altered input results in a > less > fuzzy match, you will get better answers. > > The big takeaway, which you clearly already know, is to never trust > anything ChatGPT produces.? Use it as a rough draft, but verify all the > facts. > > My fear is that there are going to be a lot of people who aren't as > diligent, and we'll end up with a lot of made up information out on the > web adding to all of the maliciously bad information that is already > out > there.? I have read that the ChatGPT researchers are worried about how > to avoid using ChatGPT's output as input to a later ChatGPT model, and > I > have no idea how they are going to achieve that! > > And keep in mind that that maliciously bad information *is part of > ChatGPT's data set*.? Some of it the people who did the training will > have > caught, but I'm willing to bet they missed a lot of it because *they* > didn't know it was bad, or it never came up during training. > > --David > > On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss > wrote: >> Yes, I know that people have been pointing out "ridiculous factual >> errors" from ChatGPT.? ?However, to make up sources that sound >> completely plausible but are fake seems like it belongs in a whole >> other category. >> >> >> >> >> >> >> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank >> wrote: >> >> >> >> >> >> ChatGPT is a conversation engine, not a search engine.? It is designed >> to provide plausible responses based on similarity of questions and >> answers to existing material on the internet, without attempting to >> correlate its responses with actual facts.? Pretty much every social >> media space I follow has had multiple posts from people pointing out >> ridiculous factual errors from ChatGPT. >> >> --Alan >> >> >> -------- Original Message -------- >> Subject: [Hidden-tech] Question about ChatGPT and machine learning >> Date: 2023-03-09 15:29 >> From: Marcia Yudkin via Hidden-discuss >> >> To: "mailto:Hidden-discuss at lists.hidden-tech.net" >> >> >> This question is for anyone who understands how the machine learning >> in >> ChatGPT works. >> >> I've been finding ChatGPT useful for summarizing information that is >> widely dispersed around the web, such as questions like "what are the >> most popular objections to X?"? However, the other day for a blog post >> I >> was writing I asked it "What are some sources on the relationship of X >> to Y?"? It gave me four sources of information, including the article >> title, where it was published and who wrote it.?? >> >> This looked great, especially since I recognized two of the author >> names >> as authorities on X.? However, when I then did a Google search, I >> could >> not track down any of the four articles, either by title, author or >> place of publication.? I tried both in Google and in Bing.? Zilch! >> >> Could ChatGPT have totally made up these sources?? If so, how does >> that >> work? >> >> I am baffled about the explanation of this.? One of the publications >> involved was Psychology Today, so we are not talking about obscure >> corners of the Internet or sites that would have disappeared recently. >> >> Thanks for any insights. >> >> Marcia Yudkin >> Introvert UpThink >> Introvert UpThink | Marcia Yudkin | Substack >> >> >> >> >> >> Introvert UpThink | Marcia Yudkin | Substack >> ? Marcia Yudkin >> ? Exploring how introverts are misunderstood, maligned and >> underappreciated in our culture - yet still thrive. Cli... >> >> >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> mailto:Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> mailto:Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > mailto:Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > mailto:Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net mailto:Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net mailto:Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan at 8wheels.org Tue Mar 14 19:49:52 2023 From: alan at 8wheels.org (Alan Frank) Date: Tue, 14 Mar 2023 15:49:52 -0400 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> Message-ID: <8342327b128fd5a35ebc16c9106bffee@8wheels.org> "My published prediction 10 years ago that by now AI would... has been proven way too optimistic" Like all other predictions ever made about AI. -------- Original Message -------- Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning Date: 2023-03-14 10:38 From: Rob Laporte via Hidden-discuss To: "Shel Horowitz" Hi All, Good discussion I wish I had more time for, but a few replies to various points made here: * Shell summarizes well: ?But I do raise questions about what it all means for the future of work, careers, creativity, etc?. Also true, ?a new wave of creativity? can happen. * Chatbot-written platforms long preceded gpt, so one can find vastly inferior website content now from those now instantly antiquated platforms. * Fact-checking and editing must happen, but that?s less time than composing all, I think. * I?ve not tested gpt?s writing extensively, but that legal letter I mentioned was truly consummate, and will be used without a single word changed. I have noticed redundancy, though usually resulting from how I asked a question. * Note that one way gpt and the like is and will be trained to improve is via sophisticated AI-human systems by which to detect chatbot text, making the writing output constantly better. * Regarding SEO, a ton to say, but for now: * gpt expedites all kinds of SEO work. * My firm has long prepared for AI, esp last 2 years: * It?s about ?Things not stings? (entities or concepts, not text strings). But yes, keyword research, a model still prevailing in most software that can't change cost-effectively now, will be expedited by gpt. My firm is currently researching to what extent old-fashioned strings correlate to the replacement: Entities within the MUM [2] revolution. * Web marketing pros must coordinate marketing channels efficiently; AI is not good at dovetailing separate silos of knowledge, like SEO, PPC, client profit logic, etc, and it's terrible at prioritizing (within a firm's) context. Hence Shell?s ?creativity.? * My published prediction 10 years ago that by now AI would relieve SEO pros of lots of routine tech work (via Google Search Console) has been proven way too optimistic--we?re still far from that fine day unfortunately. * Extrapolating from the SEO remarks, I believe professionals in most fields will have to stay abreast of adjacent fields of knowledge, so as to be able to efficiently coordinate riders of separate AI horses. Broadly contextual solutions outside single domains of knowledge is something I predict AI will remain deficient in for several years. This goes to Shell?s ?new wave of creativity.? Best Regards, Rob Laporte CEO | R&D Manager DISC - Making Web Sites Make Money Rob at 2disc.com, 413-584-6500 www.2disc.com [3] NOTE: Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Sun, 12 Mar 2023 11:57:09 -0400 Shel Horowitz via Hidden-discuss wrote --- > I agree with Marcia. ChatGPT is to a skilled writer as clip-art is > to graphics. It's adequate for basic needs but is no substitute for > a designer working specifically to convey your idea. > > I recently landed on a site that I could tell used a chatbot to > write its copy. I was repelled and left quickly--and crossed that > consultant off my list of people I'd ever hire, because she didn't > disclose that a bot wrote the site. Ugh! > > Shel Horowitz - "The Transformpreneur" > > ________________________________________________ > > Contact me to bake in profitability while addressing hunger, > > poverty, war, and catastrophic climate change > > * First business ever to be Green America Gold Certified > > * Inducted into the National Environmental Hall of Fame > > * Certified speaker: International Platform Association > > https://goingbeyondsustainability.com > > mailto:shel at greenandprofitable.com 413-586-2388 > > Award-winning, best-selling author of 10 books. > > Latest: Guerrilla Marketing to Heal the World > > (co-authored with Jay Conrad Levinson) > > Watch my TEDx Talk, > > "Impossible is a Dare: Business for a Better World" > > http://www.ted.com/tedx/events/11809 > > (move your mouse to "event videos") > > _________________________________________________ > > On Sat, Mar 11, 2023 at 9:45?PM Marcia Yudkin via Hidden-discuss > wrote: > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > >> I'm not sure I would call its output "well-written," though, It's >> competently written, well-organized and with proper grammar, but >> stodgy in style and unimaginative, unless you've given it >> instructions like "write in the style of ____." >> >> On Saturday, March 11, 2023 at 04:22:19 PM HST, Alan Frank >> wrote: >> >> I would use it for composing text, but not for facts at all. And >> if I >> asked it "How would you introduce George Takei at a political >> convention"," I would expect well-written text, but would also >> fact-check everything. >> >> -------- Original Message -------- >> Subject: Re: [Hidden-tech] Question about ChatGPT and machine >> learning >> Date: 2023-03-11 15:55 >> From: Marcia Yudkin via Hidden-discuss >> >> To: Rob Laporte >> >> Rob, >> >> Those are very interesting examples. It's a mixed track record, >> though. Based on your experience, what would you say ChatGPT >> should and >> shouldn't be used for, or how it should or shouldn't be used? >> >> For example, based on the errors in bios you saw, would you still >> use it >> for those artist bios given that you'd have to meticulously fact >> check >> everything it wrote? >> >> Marcia Yudkin >> >> On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte >> wrote: >> >> I've noticed that humans simplify complex, especially threatening, >> new >> things, by using dichotomies of good and evil, red and blue, etc, >> and >> that conviction is often inversely proportionate to knowledge. >> I've >> worked in search marketing for 28 years, have digested a lot about >> the >> kind of tech and processes underlying chatGPT (gpt), and I have no >> sound >> basis for strong conviction on any of the issues broached here. >> But this >> I can tell you from my mere 6 hours novice use so far: >> >> * In under an hour it solved a complex HR-financial legal >> question, >> and provided the letter the plaintiff should write to the >> corporation's >> HR department, with quality of writing, sensitivity to workplace >> politics, and effective brevity way better than anything I can >> recall in >> my 50 years of adulthood, decade teaching college lit and writing, >> and 3 >> decades in search marketing. Truly stunning. Save at least $2000 >> in >> legal fees that might have gone to a local law firm. >> * A few times over the years I researched best email spam >> blocking >> solutions, and considered an aggressive form of it. gpt explained >> the >> problem with that solution, and did so way faster than my past and >> still >> inclusive Google searches, saving me a few hundred dollars in IT >> consulting. >> * It completely conflated my semi-famous lawyer grandad's bio >> with >> that of his stellar but less accomplished son of the same name. >> Both are >> years deceased (most gpt data ends Sept '21), yet totally wrong. >> * So too it got the bio of a decade-deceased famous scholar of >> Roman >> architecture (friend's dad) wrong on a few points, most notably >> that >> most his career was at Smith college, not Princeton as gpt said. >> * It produced strikingly eloquent spreadsheet solutions for >> two >> different complex purposes. I asked it for the actual spreadsheet, >> and >> cell references were off, but in a second half hour of work, I'm >> sure it >> wold have gotten it right or I could correct it myself. A few >> hours of >> work time saved there, and one of the two tasks was billable. >> * My firm had a prospective writing project for a client, >> involving >> bios of famous and notable painters sold by the client. I say >> "had" >> because now gpt or its structuring within services like NeuroFash >> will >> cut both client and my firm's copywriter time substantially. >> * I've not tried but viewed a YouTube of good, arguably very >> good, >> graphic design for a marketing campaign done in well under half a >> day. >> Outside of broad ideological judgements, there's much to consider >> in how >> gpt will change work and incomes. >> >> The current version 3.5 will be upgraded to 4.0 within weeks. >> Think of >> gpt like the web in 1994 or personal PCs in 1981, with >> advancements >> happening 10x faster. >> >> Best Regards, >> >> Rob Laporte >> CEO | R&D Manager >> DISC - Making Web Sites Make Money >> Rob at 2disc.com, 413-584-6500 >> www.2disc.com [1] >> >> NOTE: Emails can be blocked by spam filters throughout the web. If >> you >> don?t get a reply within an expected span of time, please call. >> >> ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via >> Hidden-discuss >> wrote --- >> >>> David, >>> >>> Some great points there. I especially like this one: >>> >>>>> it is *ALL* made up.<< >>> >>> That helps me to dimly understand that everything the chat says >> is >>> simply plausible, no more than that. >>> >>> Maybe we should think of it as no more authoritative than the >> cocktail >>> party chatter of someone who reads indiscriminately and can't >> shut up >>> until they've spewed five paragraphs. >>> >>> Marcia >>> >>> >>> >>> >>> >>> >>> >>> >>> On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray >> via >>> Hidden-discuss wrote: >>> >>> >>> >>> >>> >>> From what I understand (admittedly from only a *basic* >> understanding of >>> machine learning), it is not so much that ChatGPT is "making >> errors", >>> but rather that it is "making stuff up", and does not admit that >> it is >>> making stuff up. >>> >>> I'm going to brain dump what I think here, but I'm not an expert >> in >>> this >>> by any stretch, so don't take me as an authority. Perhaps this >> can >>> help >>> you reason about ChartGPT until you find a better expert to >> consult ;) >>> >>> One thing to understand is that this is a *trained* model. That >> means >>> that it was given a set of questions and answers and told "these >> are >>> good, these are bad", probably with a rating of *how* good or >> bad. >>> Then >>> it was given a lot of other data (and how exactly this gets >> turned into >>> questions and answers is *way* beyond my knowledge level). Then >> a team >>> of model trainers started asking questions. The trainers would >> look at >>> the answers it came up with and rate them, thus adding to the >> "trained" >>> data set. When you tell ChatGPT that its answer was good or >> bad, you >>> are also potentially adding to that training data, by the way. >>> >>> I'm guessing that the way the system works there is actually no >> way for >>> it to "know" that it has made something up. The output that it >>> produces >>> is generated based on what you can think of as a very advanced >> version >>> of statistical language modelling: given a certain input, what >> are the >>> most likely kinds of things that would follow as a response? >> And like >>> any statistical model, when you get enough standard deviations >> out, >>> things get weird. At no point in the model output are things >> tagged as >>> "made up" or "not made up": it is *ALL* made up. >>> >>> In the middle of the bell curve the made up things are *much* >> more >>> likely to be "correct" than out at the edges of the bell curve. >> But >>> oh those edges... >>> >>> It is of course more sophisticated than a statistical model, but >> the >>> same principle applies: if there are few examples of *exactly* >> the >>> kind >>> of data your input contains, then it is going to draw from stuff >> that >>> is >>> a lot less closely related to your input for its response. But, >> and >>> here is the important part, it is going to make up *something* >> to >>> answer >>> with. If a source is mentioned multiple times in the context of >> your >>> input, it will use it. If there are no sources mentioned in the >> >>> context >>> of your input, it will generate an output that looks like the >> *kind of >>> thing* that would be a response to that *kind of input*. In >> this case >>> that included a list of articles. It generated at least one of >> them >>> from an author whose name was probably mentioned in the context >> of your >>> input, but never with an actual article name attached. Or maybe >> that >>> author was mentioned in the context of conversations containing >> a >>> subset of the *words* in your input (rather than logically >> formed >>> sentences), depending on just how fuzzy the match was. Then it >>> effectively made up a plausible sounding article name to go with >> the >>> author name, because that's what responses to other similar >> questions >>> in >>> its training data looked like (not similar in content, but >> similar in >>> *form*). >>> >>> So while I agree that making up all the sources seems like an >> extreme >>> example of this, ChatGPT is what Science Fiction calls an >> "Artificial >>> Stupid" (something that can't actually *reason*), and thus I >> think my >>> explanation is plausible. It just depends on how fuzzy the >> match was >>> that it made on the input. If the match was very fuzzy, then it >> would >>> have come back with material from its data that generally >> followed at >>> least some of your input, and then since responses the trainers >>> considered "good" to questions like that usually included some >> sources, >>> it made some up based on how the answers to other, less related, >>> questions looked. >>> >>> Anyone want to bet that four sources was the average number that >> was >>> accepted as "a good answer" by the people who did the training? >> I know >>> I've seen "four things" in a couple of ChatGPT answers, and I >> haven't >>> asked it very many questions :) >>> >>> Given all this, there are only two things you can do, one of >> which is >>> exactly what you did: ask it for the sources. Given *that* >> input, it >>> should be able to come up with the most likely response being >> the >>> actual >>> source. If it can't, then it has probably made up the source >> (note: I >>> have not tested this technique myself, but it follows logically >> from >>> how >>> I think the system works). >>> >>> The second thing you can do (which you probably also already >> did) is to >>> rephrase your input, giving it different amounts and kinds of >> context, >>> and see how the output changes. If your altered input results >> in a >>> less >>> fuzzy match, you will get better answers. >>> >>> The big takeaway, which you clearly already know, is to never >> trust >>> anything ChatGPT produces. Use it as a rough draft, but verify >> all the >>> facts. >>> >>> My fear is that there are going to be a lot of people who aren't >> as >>> diligent, and we'll end up with a lot of made up information out >> on the >>> web adding to all of the maliciously bad information that is >> already >>> out >>> there. I have read that the ChatGPT researchers are worried >> about how >>> to avoid using ChatGPT's output as input to a later ChatGPT >> model, and >>> I >>> have no idea how they are going to achieve that! >>> >>> And keep in mind that that maliciously bad information *is part >> of >>> ChatGPT's data set*. Some of it the people who did the training >> will >>> have >>> caught, but I'm willing to bet they missed a lot of it because >> *they* >>> didn't know it was bad, or it never came up during training. >>> >>> --David >>> >>> On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via >> Hidden-discuss >>> wrote: >>>> Yes, I know that people have been pointing out "ridiculous >> factual >>>> errors" from ChatGPT. However, to make up sources that sound >>>> completely plausible but are fake seems like it belongs in a >> whole >>>> other category. >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank >>>> wrote: >>>> >>>> >>>> >>>> >>>> >>>> ChatGPT is a conversation engine, not a search engine. It is >> designed >>>> to provide plausible responses based on similarity of questions >> and >>>> answers to existing material on the internet, without >> attempting to >>>> correlate its responses with actual facts. Pretty much every >> social >>>> media space I follow has had multiple posts from people >> pointing out >>>> ridiculous factual errors from ChatGPT. >>>> >>>> --Alan >>>> >>>> >>>> -------- Original Message -------- >>>> Subject: [Hidden-tech] Question about ChatGPT and machine >> learning >>>> Date: 2023-03-09 15:29 >>>> From: Marcia Yudkin via Hidden-discuss >>>> >>>> To: "Hidden-discuss at lists.hidden-tech.net" >>>> >>>> >>>> This question is for anyone who understands how the machine >> learning >>>> in >>>> ChatGPT works. >>>> >>>> I've been finding ChatGPT useful for summarizing information >> that is >>>> widely dispersed around the web, such as questions like "what >> are the >>>> most popular objections to X?" However, the other day for a >> blog post >>>> I >>>> was writing I asked it "What are some sources on the >> relationship of X >>>> to Y?" It gave me four sources of information, including the >> article >>>> title, where it was published and who wrote it. >>>> >>>> This looked great, especially since I recognized two of the >> author >>>> names >>>> as authorities on X. However, when I then did a Google search, >> I >>>> could >>>> not track down any of the four articles, either by title, >> author or >>>> place of publication. I tried both in Google and in Bing. >> Zilch! >>>> >>>> Could ChatGPT have totally made up these sources? If so, how >> does >>>> that >>>> work? >>>> >>>> I am baffled about the explanation of this. One of the >> publications >>>> involved was Psychology Today, so we are not talking about >> obscure >>>> corners of the Internet or sites that would have disappeared >> recently. >>>> >>>> Thanks for any insights. >>>> >>>> Marcia Yudkin >>>> Introvert UpThink >>>> Introvert UpThink | Marcia Yudkin | Substack >>>> >>>> >>>> >>>> >>>> >>>> Introvert UpThink | Marcia Yudkin | Substack >>>> Marcia Yudkin >>>> Exploring how introverts are misunderstood, maligned and >>>> underappreciated in our culture - yet still thrive. Cli... >>>> >>>> >>>> _______________________________________________ >>>> Hidden-discuss mailing list - home page: >> http://www.hidden-tech.net >>>> Hidden-discuss at lists.hidden-tech.net >>>> >>>> You are receiving this because you are on the Hidden-Tech >> Discussion >>>> list. >>>> If you would like to change your list preferences, Go to the >> Members >>>> page on the Hidden Tech Web site. >>>> http://www.hidden-tech.net/members >>>> _______________________________________________ >>>> Hidden-discuss mailing list - home page: >> http://www.hidden-tech.net >>>> Hidden-discuss at lists.hidden-tech.net >>>> >>>> You are receiving this because you are on the Hidden-Tech >> Discussion >>>> list. >>>> If you would like to change your list preferences, Go to the >> Members >>>> page on the Hidden Tech Web site. >>>> http://www.hidden-tech.net/members >>> _______________________________________________ >>> Hidden-discuss mailing list - home page: >> http://www.hidden-tech.net >>> Hidden-discuss at lists.hidden-tech.net >>> >>> You are receiving this because you are on the Hidden-Tech >> Discussion >>> list. >>> If you would like to change your list preferences, Go to the >> Members >>> page on the Hidden Tech Web site. >>> http://www.hidden-tech.net/members >>> _______________________________________________ >>> Hidden-discuss mailing list - home page: >> http://www.hidden-tech.net >>> Hidden-discuss at lists.hidden-tech.net >>> >>> You are receiving this because you are on the Hidden-Tech >> Discussion >>> list. >>> If you would like to change your list preferences, Go to the >> Members >>> page on the Hidden Tech Web site. >>> http://www.hidden-tech.net/members >> >> _______________________________________________ >> Hidden-discuss mailing list - home page: >> http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech >> Discussion >> list. >> If you would like to change your list preferences, Go to the >> Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members >> _______________________________________________ >> Hidden-discuss mailing list - home page: >> http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech >> Discussion list. >> If you would like to change your list preferences, Go to the >> Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members Links: ------ [1] http://www.2disc.com [2] https://searchengineland.com/google-mum-update-seo-future-383551 [3] https://www.2disc.com _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members From rob at 2disc.com Wed Mar 15 17:23:37 2023 From: rob at 2disc.com (Rob Laporte) Date: Wed, 15 Mar 2023 13:23:37 -0400 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> Message-ID: <186e64b8ef3.b5abd5955007282.309483345607273986@2disc.com> Update: I dug into one of the spreadsheets chatgpt produced for me (about Tesla revenue growth needed over five years to double and investment), and though it seemed so clear and smart, it was wrong--in ways many people might not have found.? It also got wrong the answer to: "are money market funds fdic insured?". Still, again, think improvements 10x faster than, say, PCs from 1981+. 11am to noon tomorrow Microsoft is hosting a webinar on AI and work: https://www.linkedin.com/events/7038508574142074880/about/. I rather hope Microsoft can weaken Google's monopoly, which Google has been using to public detriment.? ? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money mailto:Rob at 2disc.com,?413-584-6500 https://www.2disc.com ---- On Tue, 14 Mar 2023 10:38:44 -0400 Rob Laporte via Hidden-discuss wrote --- Hi All, Good discussion I wish I had more time for, but a few replies to various points made here: Shell summarizes well: ?But I do raise questions about what it all means for the future of work, careers, creativity, etc?. Also true, ?a new wave of creativity? can happen. Chatbot-written platforms long preceded gpt, so one can find vastly inferior website content now from those now instantly antiquated platforms.? Fact-checking and editing must happen, but that?s less time than composing all, I think. I?ve not tested gpt?s writing extensively, but that legal letter I mentioned was truly consummate, and will be used without a single word changed. I have noticed redundancy, though usually resulting from how I asked a question.? Note that one way gpt and the like is and will be trained to improve is via sophisticated AI-human systems by which to detect chatbot text, making the writing output constantly better. Regarding SEO, a ton to say, but for now: gpt expedites all kinds of SEO work.? My firm has long prepared for AI, esp last 2 years:? It?s about ?Things not stings? (entities or concepts, not text strings). But yes, keyword research, a model still prevailing in most software that can't change cost-effectively now, will be expedited by gpt. My firm is currently researching to what extent old-fashioned strings correlate to the replacement: Entities within the https://searchengineland.com/google-mum-update-seo-future-383551 revolution. Web marketing pros must coordinate marketing channels efficiently; AI is not good at dovetailing separate silos of knowledge, like SEO, PPC, client profit logic, etc, and it's terrible at prioritizing (within a firm's) context. Hence Shell?s ?creativity.? My published prediction 10 years ago that by now AI would relieve SEO pros of lots of routine tech work (via Google Search Console) has been proven way too optimistic--we?re still far from that fine day unfortunately.? Extrapolating from the SEO remarks, I believe professionals in most fields will have to stay abreast of adjacent fields of knowledge, so as to be able to efficiently coordinate riders of separate AI horses. Broadly contextual solutions outside single domains of knowledge is something I predict AI will remain deficient in for several years. This goes to Shell?s ?new wave of creativity.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money mailto:Rob at 2disc.com,?413-584-6500 https://www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Sun, 12 Mar 2023 11:57:09 -0400 Shel Horowitz via Hidden-discuss wrote --- _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members I agree with Marcia. ChatGPT is to a skilled writer as clip-art is to graphics. It's adequate for basic needs but is no substitute for a designer working specifically to convey your idea. I recently landed on a site that I could tell used a chatbot to write its copy. I was repelled and left quickly--and crossed that consultant off my list of people I'd ever hire, because she didn't disclose that a bot wrote the site. Ugh! Shel Horowitz - "The Transformpreneur" ________________________________________________ Contact me to bake in profitability while addressing hunger,? poverty, war, and catastrophic climate change * First business ever to be Green America Gold Certified * Inducted into the National Environmental Hall of Fame * Certified speaker: International Platform Association https://goingbeyondsustainability.com? mailto:mailto:shel at greenandprofitable.com 413-586-2388 Award-winning, best-selling author of 10 books.? Latest: Guerrilla Marketing to Heal the World? (co-authored with Jay Conrad Levinson) Watch my TEDx Talk, "Impossible is a Dare: Business for a Better World" http://www.ted.com/tedx/events/11809? (move your mouse to "event videos") _________________________________________________ On Sat, Mar 11, 2023 at 9:45?PM Marcia Yudkin via Hidden-discuss wrote: _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net mailto:Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members I'm not sure I would call its output "well-written," though,? It's competently written, well-organized and with proper grammar, but stodgy in style and unimaginative, unless you've given it instructions like "write in the style of ____." On Saturday, March 11, 2023 at 04:22:19 PM HST, Alan Frank wrote: I would use it for composing text, but not for facts at all.? And if I asked it "How would you introduce George Takei at a political convention"," I would expect well-written text, but would also fact-check everything. -------- Original Message -------- Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning Date: 2023-03-11 15:55 From: Marcia Yudkin via Hidden-discuss To: Rob Laporte Rob, Those are very interesting examples.? It's a mixed track record, though.? Based on your experience, what would you say ChatGPT should and shouldn't be used for, or how it should or shouldn't be used? For example, based on the errors in bios you saw, would you still use it for those artist bios given that you'd have to meticulously fact check everything it wrote? Marcia Yudkin On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte wrote: I've noticed that humans simplify complex, especially threatening, new things, by using dichotomies of good and evil, red and blue, etc, and that conviction is often inversely proportionate to knowledge. I've worked in search marketing for 28 years, have digested a lot about the kind of tech and processes underlying chatGPT (gpt), and I have no sound basis for strong conviction on any of the issues broached here. But this I can tell you from my mere 6 hours novice use so far: ????* In under an hour it solved a complex HR-financial legal question, and provided the letter the plaintiff should write to the corporation's HR department, with quality of writing, sensitivity to workplace politics, and effective brevity way better than anything I can recall in my 50 years of adulthood, decade teaching college lit and writing, and 3 decades in search marketing. Truly stunning. Save at least $2000 in legal fees that might have gone to a local law firm. ????* A few times over the years I researched best email spam blocking solutions, and considered an aggressive form of it. gpt explained the problem with that solution, and did so way faster than my past and still inclusive Google searches, saving me a few hundred dollars in IT consulting. ????* It completely conflated my semi-famous lawyer grandad's bio with that of his stellar but less accomplished son of the same name. Both are years deceased (most gpt data ends Sept '21), yet totally wrong. ????* So too it got the bio of a decade-deceased famous scholar of Roman architecture (friend's dad) wrong on a few points, most notably that most his career was at Smith college, not Princeton as gpt said.? ????* It produced strikingly eloquent spreadsheet solutions for two different complex purposes. I asked it for the actual spreadsheet, and cell references were off, but in a second half hour of work, I'm sure it wold have gotten it right or I could correct it myself. A few hours of work time saved there, and one of the two tasks was billable.? ????* My firm had a prospective writing project for a client, involving bios of famous and notable painters sold by the client. I say "had" because now gpt or its structuring within services like NeuroFash?will cut both client and my firm's copywriter time substantially.? ????* I've not tried but viewed a YouTube of good, arguably very good, graphic design for a marketing campaign done in well under half a day. Outside of broad ideological judgements, there's much to consider in how gpt will change work and incomes.? The current version 3.5 will be upgraded to 4.0 within weeks. Think of gpt like the web in 1994 or personal PCs in 1981, with advancements happening 10x faster.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money mailto:Rob at 2disc.com,?413-584-6500 http://www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss wrote --- > David, > > Some great points there.? I especially like this one: > >>> it is *ALL* made up.<< > > That helps me to dimly understand that everything the chat says is > simply plausible, no more than that. > > Maybe we should think of it as no more authoritative than the cocktail > party chatter of someone who reads indiscriminately and can't shut up > until they've spewed five paragraphs. > > Marcia > > > > > > > > > On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via > Hidden-discuss wrote: > > > > > > From what I understand (admittedly from only a *basic* understanding of > machine learning), it is not so much that ChatGPT is "making errors", > but rather that it is "making stuff up", and does not admit that it is > making stuff up. > > I'm going to brain dump what I think here, but I'm not an expert in > this > by any stretch, so don't take me as an authority.? Perhaps this can > help > you reason about ChartGPT until you find a better expert to consult ;) > > One thing to understand is that this is a *trained* model.? That means > that it was given a set of questions and answers and told "these are > good, these are bad", probably with a rating of *how* good or bad.? > Then > it was given a lot of other data (and how exactly this gets turned into > questions and answers is *way* beyond my knowledge level).? Then a team > of model trainers started asking questions.? The trainers would look at > the answers it came up with and rate them, thus adding to the "trained" > data set.? When you tell ChatGPT that its answer was good or bad, you > are also potentially adding to that training data, by the way. > > I'm guessing that the way the system works there is actually no way for > it to "know" that it has made something up.? The output that it > produces > is generated based on what you can think of as a very advanced version > of statistical language modelling:? given a certain input, what are the > most likely kinds of things that would follow as a response?? And like > any statistical model, when you get enough standard deviations out, > things get weird.? At no point in the model output are things tagged as > "made up" or "not made up":? it is *ALL* made up. > > In the middle of the bell curve the made up things are *much* more > likely to be "correct" than out at the edges of the bell curve.? But > oh those edges... > > It is of course more sophisticated than a statistical model, but the > same principle applies:? if there are few examples of *exactly* the > kind > of data your input contains, then it is going to draw from stuff that > is > a lot less closely related to your input for its response.? But, and > here is the important part, it is going to make up *something* to > answer > with.? If a source is mentioned multiple times in the context of your > input, it will use it.? If there are no sources mentioned in the > context > of your input, it will generate an output that looks like the *kind of > thing* that would be a response to that *kind of input*.? In this case > that included a list of articles.? It generated at least one of them > from an author whose name was probably mentioned in the context of your > input, but never with an actual article name attached.? Or maybe that > author was mentioned in the context of conversations containing a > subset of the *words* in your input (rather than logically formed > sentences), depending on just how fuzzy the match was.? Then it > effectively made up a plausible sounding article name to go with the > author name, because that's what responses to other similar questions > in > its training data looked like (not similar in content, but similar in > *form*). > > So while I agree that making up all the sources seems like an extreme > example of this, ChatGPT is what Science Fiction calls an "Artificial > Stupid" (something that can't actually *reason*), and thus I think my > explanation is plausible.? It just depends on how fuzzy the match was > that it made on the input.? If the match was very fuzzy, then it would > have come back with material from its data that generally followed at > least some of your input, and then since responses the trainers > considered "good" to questions like that usually included some sources, > it made some up based on how the answers to other, less related, > questions looked. > > Anyone want to bet that four sources was the average number that was > accepted as "a good answer" by the people who did the training?? I know > I've seen "four things" in a couple of ChatGPT answers, and I haven't > asked it very many questions :) > > Given all this, there are only two things you can do, one of which is > exactly what you did: ask it for the sources.? Given *that* input, it > should be able to come up with the most likely response being the > actual > source.? If it can't, then it has probably made up the source (note: I > have not tested this technique myself, but it follows logically from > how > I think the system works). > > The second thing you can do (which you probably also already did) is to > rephrase your input, giving it different amounts and kinds of context, > and see how the output changes.? If your altered input results in a > less > fuzzy match, you will get better answers. > > The big takeaway, which you clearly already know, is to never trust > anything ChatGPT produces.? Use it as a rough draft, but verify all the > facts. > > My fear is that there are going to be a lot of people who aren't as > diligent, and we'll end up with a lot of made up information out on the > web adding to all of the maliciously bad information that is already > out > there.? I have read that the ChatGPT researchers are worried about how > to avoid using ChatGPT's output as input to a later ChatGPT model, and > I > have no idea how they are going to achieve that! > > And keep in mind that that maliciously bad information *is part of > ChatGPT's data set*.? Some of it the people who did the training will > have > caught, but I'm willing to bet they missed a lot of it because *they* > didn't know it was bad, or it never came up during training. > > --David > > On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss > wrote: >> Yes, I know that people have been pointing out "ridiculous factual >> errors" from ChatGPT.? ?However, to make up sources that sound >> completely plausible but are fake seems like it belongs in a whole >> other category. >> >> >> >> >> >> >> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank >> wrote: >> >> >> >> >> >> ChatGPT is a conversation engine, not a search engine.? It is designed >> to provide plausible responses based on similarity of questions and >> answers to existing material on the internet, without attempting to >> correlate its responses with actual facts.? Pretty much every social >> media space I follow has had multiple posts from people pointing out >> ridiculous factual errors from ChatGPT. >> >> --Alan >> >> >> -------- Original Message -------- >> Subject: [Hidden-tech] Question about ChatGPT and machine learning >> Date: 2023-03-09 15:29 >> From: Marcia Yudkin via Hidden-discuss >> >> To: "mailto:Hidden-discuss at lists.hidden-tech.net" >> >> >> This question is for anyone who understands how the machine learning >> in >> ChatGPT works. >> >> I've been finding ChatGPT useful for summarizing information that is >> widely dispersed around the web, such as questions like "what are the >> most popular objections to X?"? However, the other day for a blog post >> I >> was writing I asked it "What are some sources on the relationship of X >> to Y?"? It gave me four sources of information, including the article >> title, where it was published and who wrote it.?? >> >> This looked great, especially since I recognized two of the author >> names >> as authorities on X.? However, when I then did a Google search, I >> could >> not track down any of the four articles, either by title, author or >> place of publication.? I tried both in Google and in Bing.? Zilch! >> >> Could ChatGPT have totally made up these sources?? If so, how does >> that >> work? >> >> I am baffled about the explanation of this.? One of the publications >> involved was Psychology Today, so we are not talking about obscure >> corners of the Internet or sites that would have disappeared recently. >> >> Thanks for any insights. >> >> Marcia Yudkin >> Introvert UpThink >> Introvert UpThink | Marcia Yudkin | Substack >> >> >> >> >> >> Introvert UpThink | Marcia Yudkin | Substack >> ? Marcia Yudkin >> ? Exploring how introverts are misunderstood, maligned and >> underappreciated in our culture - yet still thrive. Cli... >> >> >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> mailto:Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> mailto:Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > mailto:Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > mailto:Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion > list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net mailto:Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net mailto:Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members -------------- next part -------------- An HTML attachment was scrubbed... URL: From yudkinyudkin at yahoo.com Fri Mar 17 00:41:10 2023 From: yudkinyudkin at yahoo.com (Marcia Yudkin) Date: Fri, 17 Mar 2023 00:41:10 +0000 (UTC) Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> Message-ID: <1459188877.732266.1679013670900@mail.yahoo.com> Here is a bit of additional information that came up today in an answer from ChatGPT: >>as of my knowledge cutoff date of 2021.<< Don't know if this is relevant to any part of our discussion, but it's good to know. On Tuesday, March 14, 2023 at 04:39:18 AM HST, Rob Laporte wrote: Hi All, Good discussion I wish I had more time for, but a few replies to various points made here: ????* Shell summarizes well: ?But I do raise questions about what it all means for the future of work, careers, creativity, etc?. Also true, ?a new wave of creativity? can happen. ????* Chatbot-written platforms long preceded gpt, so one can find vastly inferior website content now from those now instantly antiquated platforms.? ????* Fact-checking and editing must happen, but that?s less time than composing all, I think. ????* I?ve not tested gpt?s writing extensively, but that legal letter I mentioned was truly consummate, and will be used without a single word changed. I have noticed redundancy, though usually resulting from how I asked a question.? ????????* Note that one way gpt and the like is and will be trained to improve is via sophisticated AI-human systems by which to detect chatbot text, making the writing output constantly better. ????* Regarding SEO, a ton to say, but for now: ????????* gpt expedites all kinds of SEO work.? ????????* My firm has long prepared for AI, esp last 2 years:? ????????????* It?s about ?Things not stings? (entities or concepts, not text strings). But yes, keyword research, a model still prevailing in most software that can't change cost-effectively now, will be expedited by gpt. My firm is currently researching to what extent old-fashioned strings correlate to the replacement: Entities within the MUM revolution. ????????????* Web marketing pros must coordinate marketing channels efficiently; AI is not good at dovetailing separate silos of knowledge, like SEO, PPC, client profit logic, etc, and it's terrible at prioritizing (within a firm's) context. Hence Shell?s ?creativity.? ????????????* My published prediction 10 years ago that by now AI would relieve SEO pros of lots of routine tech work (via Google Search Console) has been proven way too optimistic--we?re still far from that fine day unfortunately.? ????????* Extrapolating from the SEO remarks, I believe professionals in most fields will have to stay abreast of adjacent fields of knowledge, so as to be able to efficiently coordinate riders of separate AI horses. Broadly contextual solutions outside single domains of knowledge is something I predict AI will remain deficient in for several years. This goes to Shell?s ?new wave of creativity.? Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money Rob at 2disc.com,?413-584-6500 www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Sun, 12 Mar 2023 11:57:09 -0400 Shel Horowitz via Hidden-discuss wrote --- > I agree with Marcia. ChatGPT is to a skilled writer as clip-art is to graphics. It's adequate for basic needs but is no substitute for a designer working specifically to convey your idea. > > I recently landed on a site that I could tell used a chatbot to write its copy. I was repelled and left quickly--and crossed that consultant off my list of people I'd ever hire, because she didn't disclose that a bot wrote the site. Ugh! > > > Shel Horowitz - "The Transformpreneur" > ________________________________________________ > Contact me to bake in profitability while addressing hunger,? > poverty, war, and catastrophic climate change > * First business ever to be Green America Gold Certified > * Inducted into the National Environmental Hall of Fame > * Certified speaker: International Platform Association > https://goingbeyondsustainability.com? > mailto:shel at greenandprofitable.com 413-586-2388 > Award-winning, best-selling author of 10 books.? > Latest: Guerrilla Marketing to Heal the World? > (co-authored with Jay Conrad Levinson) > > Watch my TEDx Talk, > "Impossible is a Dare: Business for a Better World" > http://www.ted.com/tedx/events/11809? > (move your mouse to "event videos") > _________________________________________________ > > > > On Sat, Mar 11, 2023 at 9:45?PM Marcia Yudkin via Hidden-discuss wrote: > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members >> I'm not sure I would call its output "well-written," though,? It's competently written, well-organized and with proper grammar, but stodgy in style and unimaginative, unless you've given it instructions like "write in the style of ____." >> >> >> >> >> >> >> On Saturday, March 11, 2023 at 04:22:19 PM HST, Alan Frank wrote: >> >> >> >> >> >> I would use it for composing text, but not for facts at all.? And if I >> asked it "How would you introduce George Takei at a political >> convention"," I would expect well-written text, but would also >> fact-check everything. >> >> >> -------- Original Message -------- >> Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning >> Date: 2023-03-11 15:55 >> From: Marcia Yudkin via Hidden-discuss >> >> To: Rob Laporte >> >> Rob, >> >> Those are very interesting examples.? It's a mixed track record, >> though.? Based on your experience, what would you say ChatGPT should and >> shouldn't be used for, or how it should or shouldn't be used? >> >> For example, based on the errors in bios you saw, would you still use it >> for those artist bios given that you'd have to meticulously fact check >> everything it wrote? >> >> Marcia Yudkin >> >> >> >> >> >> >> On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte >> wrote: >> >> >> >> >> >> I've noticed that humans simplify complex, especially threatening, new >> things, by using dichotomies of good and evil, red and blue, etc, and >> that conviction is often inversely proportionate to knowledge. I've >> worked in search marketing for 28 years, have digested a lot about the >> kind of tech and processes underlying chatGPT (gpt), and I have no sound >> basis for strong conviction on any of the issues broached here. But this >> I can tell you from my mere 6 hours novice use so far: >> >> ????* In under an hour it solved a complex HR-financial legal question, >> and provided the letter the plaintiff should write to the corporation's >> HR department, with quality of writing, sensitivity to workplace >> politics, and effective brevity way better than anything I can recall in >> my 50 years of adulthood, decade teaching college lit and writing, and 3 >> decades in search marketing. Truly stunning. Save at least $2000 in >> legal fees that might have gone to a local law firm. >> ????* A few times over the years I researched best email spam blocking >> solutions, and considered an aggressive form of it. gpt explained the >> problem with that solution, and did so way faster than my past and still >> inclusive Google searches, saving me a few hundred dollars in IT >> consulting. >> ????* It completely conflated my semi-famous lawyer grandad's bio with >> that of his stellar but less accomplished son of the same name. Both are >> years deceased (most gpt data ends Sept '21), yet totally wrong. >> ????* So too it got the bio of a decade-deceased famous scholar of Roman >> architecture (friend's dad) wrong on a few points, most notably that >> most his career was at Smith college, not Princeton as gpt said.? >> ????* It produced strikingly eloquent spreadsheet solutions for two >> different complex purposes. I asked it for the actual spreadsheet, and >> cell references were off, but in a second half hour of work, I'm sure it >> wold have gotten it right or I could correct it myself. A few hours of >> work time saved there, and one of the two tasks was billable.? >> ????* My firm had a prospective writing project for a client, involving >> bios of famous and notable painters sold by the client. I say "had" >> because now gpt or its structuring within services like NeuroFash?will >> cut both client and my firm's copywriter time substantially.? >> ????* I've not tried but viewed a YouTube of good, arguably very good, >> graphic design for a marketing campaign done in well under half a day. >> Outside of broad ideological judgements, there's much to consider in how >> gpt will change work and incomes.? >> >> The current version 3.5 will be upgraded to 4.0 within weeks. Think of >> gpt like the web in 1994 or personal PCs in 1981, with advancements >> happening 10x faster.? >> >> Best Regards, >> >> Rob Laporte >> CEO? |? R&D Manager >> DISC - Making Web Sites?Make Money >> Rob at 2disc.com,?413-584-6500 >> www.2disc.com >> >> NOTE:?Emails can be blocked by spam filters throughout the web. If you >> don?t get a reply within an expected span of time, please call. >> >> >> >> ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss >> wrote --- >> >>> David, >>> >>> Some great points there.? I especially like this one: >>> >>>>> it is *ALL* made up.<< >>> >>> That helps me to dimly understand that everything the chat says is >>> simply plausible, no more than that. >>> >>> Maybe we should think of it as no more authoritative than the cocktail >>> party chatter of someone who reads indiscriminately and can't shut up >>> until they've spewed five paragraphs. >>> >>> Marcia >>> >>> >>> >>> >>> >>> >>> >>> >>> On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via >>> Hidden-discuss wrote: >>> >>> >>> >>> >>> >>> From what I understand (admittedly from only a *basic* understanding of >>> machine learning), it is not so much that ChatGPT is "making errors", >>> but rather that it is "making stuff up", and does not admit that it is >>> making stuff up. >>> >>> I'm going to brain dump what I think here, but I'm not an expert in >>> this >>> by any stretch, so don't take me as an authority.? Perhaps this can >>> help >>> you reason about ChartGPT until you find a better expert to consult ;) >>> >>> One thing to understand is that this is a *trained* model.? That means >>> that it was given a set of questions and answers and told "these are >>> good, these are bad", probably with a rating of *how* good or bad.? >>> Then >>> it was given a lot of other data (and how exactly this gets turned into >>> questions and answers is *way* beyond my knowledge level).? Then a team >>> of model trainers started asking questions.? The trainers would look at >>> the answers it came up with and rate them, thus adding to the "trained" >>> data set.? When you tell ChatGPT that its answer was good or bad, you >>> are also potentially adding to that training data, by the way. >>> >>> I'm guessing that the way the system works there is actually no way for >>> it to "know" that it has made something up.? The output that it >>> produces >>> is generated based on what you can think of as a very advanced version >>> of statistical language modelling:? given a certain input, what are the >>> most likely kinds of things that would follow as a response?? And like >>> any statistical model, when you get enough standard deviations out, >>> things get weird.? At no point in the model output are things tagged as >>> "made up" or "not made up":? it is *ALL* made up. >>> >>> In the middle of the bell curve the made up things are *much* more >>> likely to be "correct" than out at the edges of the bell curve.? But >>> oh those edges... >>> >>> It is of course more sophisticated than a statistical model, but the >>> same principle applies:? if there are few examples of *exactly* the >>> kind >>> of data your input contains, then it is going to draw from stuff that >>> is >>> a lot less closely related to your input for its response.? But, and >>> here is the important part, it is going to make up *something* to >>> answer >>> with.? If a source is mentioned multiple times in the context of your >>> input, it will use it.? If there are no sources mentioned in the >>> context >>> of your input, it will generate an output that looks like the *kind of >>> thing* that would be a response to that *kind of input*.? In this case >>> that included a list of articles.? It generated at least one of them >>> from an author whose name was probably mentioned in the context of your >>> input, but never with an actual article name attached.? Or maybe that >>> author was mentioned in the context of conversations containing a >>> subset of the *words* in your input (rather than logically formed >>> sentences), depending on just how fuzzy the match was.? Then it >>> effectively made up a plausible sounding article name to go with the >>> author name, because that's what responses to other similar questions >>> in >>> its training data looked like (not similar in content, but similar in >>> *form*). >>> >>> So while I agree that making up all the sources seems like an extreme >>> example of this, ChatGPT is what Science Fiction calls an "Artificial >>> Stupid" (something that can't actually *reason*), and thus I think my >>> explanation is plausible.? It just depends on how fuzzy the match was >>> that it made on the input.? If the match was very fuzzy, then it would >>> have come back with material from its data that generally followed at >>> least some of your input, and then since responses the trainers >>> considered "good" to questions like that usually included some sources, >>> it made some up based on how the answers to other, less related, >>> questions looked. >>> >>> Anyone want to bet that four sources was the average number that was >>> accepted as "a good answer" by the people who did the training?? I know >>> I've seen "four things" in a couple of ChatGPT answers, and I haven't >>> asked it very many questions :) >>> >>> Given all this, there are only two things you can do, one of which is >>> exactly what you did: ask it for the sources.? Given *that* input, it >>> should be able to come up with the most likely response being the >>> actual >>> source.? If it can't, then it has probably made up the source (note: I >>> have not tested this technique myself, but it follows logically from >>> how >>> I think the system works). >>> >>> The second thing you can do (which you probably also already did) is to >>> rephrase your input, giving it different amounts and kinds of context, >>> and see how the output changes.? If your altered input results in a >>> less >>> fuzzy match, you will get better answers. >>> >>> The big takeaway, which you clearly already know, is to never trust >>> anything ChatGPT produces.? Use it as a rough draft, but verify all the >>> facts. >>> >>> My fear is that there are going to be a lot of people who aren't as >>> diligent, and we'll end up with a lot of made up information out on the >>> web adding to all of the maliciously bad information that is already >>> out >>> there.? I have read that the ChatGPT researchers are worried about how >>> to avoid using ChatGPT's output as input to a later ChatGPT model, and >>> I >>> have no idea how they are going to achieve that! >>> >>> And keep in mind that that maliciously bad information *is part of >>> ChatGPT's data set*.? Some of it the people who did the training will >>> have >>> caught, but I'm willing to bet they missed a lot of it because *they* >>> didn't know it was bad, or it never came up during training. >>> >>> --David >>> >>> On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss >>> wrote: >>>> Yes, I know that people have been pointing out "ridiculous factual >>>> errors" from ChatGPT.? ?However, to make up sources that sound >>>> completely plausible but are fake seems like it belongs in a whole >>>> other category. >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank >>>> wrote: >>>> >>>> >>>> >>>> >>>> >>>> ChatGPT is a conversation engine, not a search engine.? It is designed >>>> to provide plausible responses based on similarity of questions and >>>> answers to existing material on the internet, without attempting to >>>> correlate its responses with actual facts.? Pretty much every social >>>> media space I follow has had multiple posts from people pointing out >>>> ridiculous factual errors from ChatGPT. >>>> >>>> --Alan >>>> >>>> >>>> -------- Original Message -------- >>>> Subject: [Hidden-tech] Question about ChatGPT and machine learning >>>> Date: 2023-03-09 15:29 >>>> From: Marcia Yudkin via Hidden-discuss >>>> >>>> To: "Hidden-discuss at lists.hidden-tech.net" >>>> >>>> >>>> This question is for anyone who understands how the machine learning >>>> in >>>> ChatGPT works. >>>> >>>> I've been finding ChatGPT useful for summarizing information that is >>>> widely dispersed around the web, such as questions like "what are the >>>> most popular objections to X?"? However, the other day for a blog post >>>> I >>>> was writing I asked it "What are some sources on the relationship of X >>>> to Y?"? It gave me four sources of information, including the article >>>> title, where it was published and who wrote it.?? >>>> >>>> This looked great, especially since I recognized two of the author >>>> names >>>> as authorities on X.? However, when I then did a Google search, I >>>> could >>>> not track down any of the four articles, either by title, author or >>>> place of publication.? I tried both in Google and in Bing.? Zilch! >>>> >>>> Could ChatGPT have totally made up these sources?? If so, how does >>>> that >>>> work? >>>> >>>> I am baffled about the explanation of this.? One of the publications >>>> involved was Psychology Today, so we are not talking about obscure >>>> corners of the Internet or sites that would have disappeared recently. >>>> >>>> Thanks for any insights. >>>> >>>> Marcia Yudkin >>>> Introvert UpThink >>>> Introvert UpThink | Marcia Yudkin | Substack >>>> >>>> >>>> >>>> >>>> >>>> Introvert UpThink | Marcia Yudkin | Substack >>>> ? Marcia Yudkin >>>> ? Exploring how introverts are misunderstood, maligned and >>>> underappreciated in our culture - yet still thrive. Cli... >>>> >>>> >>>> _______________________________________________ >>>> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >>>> Hidden-discuss at lists.hidden-tech.net >>>> >>>> You are receiving this because you are on the Hidden-Tech Discussion >>>> list. >>>> If you would like to change your list preferences, Go to the Members >>>> page on the Hidden Tech Web site. >>>> http://www.hidden-tech.net/members >>>> _______________________________________________ >>>> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >>>> Hidden-discuss at lists.hidden-tech.net >>>> >>>> You are receiving this because you are on the Hidden-Tech Discussion >>>> list. >>>> If you would like to change your list preferences, Go to the Members >>>> page on the Hidden Tech Web site. >>>> http://www.hidden-tech.net/members >>> _______________________________________________ >>> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >>> Hidden-discuss at lists.hidden-tech.net >>> >>> You are receiving this because you are on the Hidden-Tech Discussion >>> list. >>> If you would like to change your list preferences, Go to the Members >>> page on the Hidden Tech Web site. >>> http://www.hidden-tech.net/members >>> _______________________________________________ >>> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >>> Hidden-discuss at lists.hidden-tech.net >>> >>> You are receiving this because you are on the Hidden-Tech Discussion >>> list. >>> If you would like to change your list preferences, Go to the Members >>> page on the Hidden Tech Web site. >>> http://www.hidden-tech.net/members >> >> >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion >> list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members >> _______________________________________________ >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members From tech at montaguewebworks.com Fri Mar 17 21:50:07 2023 From: tech at montaguewebworks.com (Michael Muller) Date: Fri, 17 Mar 2023 17:50:07 -0400 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <1459188877.732266.1679013670900@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> Message-ID: <1c3d8e93-6662-0f02-8a5f-401ef1c18a87@montaguewebworks.com> Excellent story about ChatGPT and OpenAI on NPR today. It's not up on their website yet. Mik Mik Muller, president Montague WebWorks 20 River Street, Greenfield, MA 413-320-5336 http://MontagueWebWorks.com Powered by ROCKETFUSION On 3/16/2023 8:41 PM, Marcia Yudkin via Hidden-discuss wrote: > Here is a bit of additional information that came up today in an answer from ChatGPT: > >>> as of my knowledge cutoff date of 2021.<< > Don't know if this is relevant to any part of our discussion, but it's good to know. > > > > > > > On Tuesday, March 14, 2023 at 04:39:18 AM HST, Rob Laporte wrote: > > > > > > Hi All, > Good discussion I wish I had more time for, but a few replies to various points made here: > ????* Shell summarizes well: ?But I do raise questions about what it all means for the future of work, careers, creativity, etc?. Also true, ?a new wave of creativity? can happen. > ????* Chatbot-written platforms long preceded gpt, so one can find vastly inferior website content now from those now instantly antiquated platforms. > ????* Fact-checking and editing must happen, but that?s less time than composing all, I think. > ????* I?ve not tested gpt?s writing extensively, but that legal letter I mentioned was truly consummate, and will be used without a single word changed. I have noticed redundancy, though usually resulting from how I asked a question. > ????????* Note that one way gpt and the like is and will be trained to improve is via sophisticated AI-human systems by which to detect chatbot text, making the writing output constantly better. > ????* Regarding SEO, a ton to say, but for now: > ????????* gpt expedites all kinds of SEO work. > ????????* My firm has long prepared for AI, esp last 2 years: > ????????????* It?s about ?Things not stings? (entities or concepts, not text strings). But yes, keyword research, a model still prevailing in most software that can't change cost-effectively now, will be expedited by gpt. My firm is currently researching to what extent old-fashioned strings correlate to the replacement: Entities within the MUM revolution. > ????????????* Web marketing pros must coordinate marketing channels efficiently; AI is not good at dovetailing separate silos of knowledge, like SEO, PPC, client profit logic, etc, and it's terrible at prioritizing (within a firm's) context. Hence Shell?s ?creativity.? > ????????????* My published prediction 10 years ago that by now AI would relieve SEO pros of lots of routine tech work (via Google Search Console) has been proven way too optimistic--we?re still far from that fine day unfortunately. > ????????* Extrapolating from the SEO remarks, I believe professionals in most fields will have to stay abreast of adjacent fields of knowledge, so as to be able to efficiently coordinate riders of separate AI horses. Broadly contextual solutions outside single domains of knowledge is something I predict AI will remain deficient in for several years. This goes to Shell?s ?new wave of creativity.? > > Best Regards, > > Rob Laporte > CEO? |? R&D Manager > DISC - Making Web Sites?Make Money > Rob at 2disc.com,?413-584-6500 > www.2disc.com > > NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. > > > > ---- On Sun, 12 Mar 2023 11:57:09 -0400 Shel Horowitz via Hidden-discuss wrote --- > >> I agree with Marcia. ChatGPT is to a skilled writer as clip-art is to graphics. It's adequate for basic needs but is no substitute for a designer working specifically to convey your idea. >> >> I recently landed on a site that I could tell used a chatbot to write its copy. I was repelled and left quickly--and crossed that consultant off my list of people I'd ever hire, because she didn't disclose that a bot wrote the site. Ugh! >> >> >> Shel Horowitz - "The Transformpreneur" >> ________________________________________________ >> Contact me to bake in profitability while addressing hunger, >> poverty, war, and catastrophic climate change >> * First business ever to be Green America Gold Certified >> * Inducted into the National Environmental Hall of Fame >> * Certified speaker: International Platform Association >> https://goingbeyondsustainability.com >> mailto:shel at greenandprofitable.com 413-586-2388 >> Award-winning, best-selling author of 10 books. >> Latest: Guerrilla Marketing to Heal the World >> (co-authored with Jay Conrad Levinson) >> >> Watch my TEDx Talk, >> "Impossible is a Dare: Business for a Better World" http://www.ted.com/tedx/events/11809 (move your mouse to "event videos") >> _________________________________________________ >> >> >> >> On Sat, Mar 11, 2023 at 9:45?PM Marcia Yudkin via Hidden-discuss wrote: >> >> _______________________________________________ >> Hidden-discuss mailing list - home page:http://www.hidden-tech.net >> Hidden-discuss at lists.hidden-tech.net >> >> You are receiving this because you are on the Hidden-Tech Discussion list. >> If you would like to change your list preferences, Go to the Members >> page on the Hidden Tech Web site. >> http://www.hidden-tech.net/members >>> I'm not sure I would call its output "well-written," though,? It's competently written, well-organized and with proper grammar, but stodgy in style and unimaginative, unless you've given it instructions like "write in the style of ____." >>> >>> >>> >>> >>> >>> >>> On Saturday, March 11, 2023 at 04:22:19 PM HST, Alan Frank wrote: >>> >>> >>> >>> >>> >>> I would use it for composing text, but not for facts at all.? And if I >>> asked it "How would you introduce George Takei at a political >>> convention"," I would expect well-written text, but would also >>> fact-check everything. >>> >>> >>> -------- Original Message -------- >>> Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning >>> Date: 2023-03-11 15:55 >>> From: Marcia Yudkin via Hidden-discuss >>> >>> To: Rob Laporte >>> >>> Rob, >>> >>> Those are very interesting examples.? It's a mixed track record, >>> though.? Based on your experience, what would you say ChatGPT should and >>> shouldn't be used for, or how it should or shouldn't be used? >>> >>> For example, based on the errors in bios you saw, would you still use it >>> for those artist bios given that you'd have to meticulously fact check >>> everything it wrote? >>> >>> Marcia Yudkin >>> >>> >>> >>> >>> >>> >>> On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte >>> wrote: >>> >>> >>> >>> >>> >>> I've noticed that humans simplify complex, especially threatening, new >>> things, by using dichotomies of good and evil, red and blue, etc, and >>> that conviction is often inversely proportionate to knowledge. I've >>> worked in search marketing for 28 years, have digested a lot about the >>> kind of tech and processes underlying chatGPT (gpt), and I have no sound >>> basis for strong conviction on any of the issues broached here. But this >>> I can tell you from my mere 6 hours novice use so far: >>> >>> ????* In under an hour it solved a complex HR-financial legal question, >>> and provided the letter the plaintiff should write to the corporation's >>> HR department, with quality of writing, sensitivity to workplace >>> politics, and effective brevity way better than anything I can recall in >>> my 50 years of adulthood, decade teaching college lit and writing, and 3 >>> decades in search marketing. Truly stunning. Save at least $2000 in >>> legal fees that might have gone to a local law firm. >>> ????* A few times over the years I researched best email spam blocking >>> solutions, and considered an aggressive form of it. gpt explained the >>> problem with that solution, and did so way faster than my past and still >>> inclusive Google searches, saving me a few hundred dollars in IT >>> consulting. >>> ????* It completely conflated my semi-famous lawyer grandad's bio with >>> that of his stellar but less accomplished son of the same name. Both are >>> years deceased (most gpt data ends Sept '21), yet totally wrong. >>> ????* So too it got the bio of a decade-deceased famous scholar of Roman >>> architecture (friend's dad) wrong on a few points, most notably that >>> most his career was at Smith college, not Princeton as gpt said. >>> ????* It produced strikingly eloquent spreadsheet solutions for two >>> different complex purposes. I asked it for the actual spreadsheet, and >>> cell references were off, but in a second half hour of work, I'm sure it >>> wold have gotten it right or I could correct it myself. A few hours of >>> work time saved there, and one of the two tasks was billable. >>> ????* My firm had a prospective writing project for a client, involving >>> bios of famous and notable painters sold by the client. I say "had" >>> because now gpt or its structuring within services like NeuroFash?will >>> cut both client and my firm's copywriter time substantially. >>> ????* I've not tried but viewed a YouTube of good, arguably very good, >>> graphic design for a marketing campaign done in well under half a day. >>> Outside of broad ideological judgements, there's much to consider in how >>> gpt will change work and incomes. >>> >>> The current version 3.5 will be upgraded to 4.0 within weeks. Think of >>> gpt like the web in 1994 or personal PCs in 1981, with advancements >>> happening 10x faster. >>> >>> Best Regards, >>> >>> Rob Laporte >>> CEO? |? R&D Manager >>> DISC - Making Web Sites?Make Money >>> Rob at 2disc.com,?413-584-6500 >>> www.2disc.com >>> >>> NOTE:?Emails can be blocked by spam filters throughout the web. If you >>> don?t get a reply within an expected span of time, please call. >>> >>> >>> >>> ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via Hidden-discuss >>> wrote --- >>> >>>> David, >>>> >>>> Some great points there.? I especially like this one: >>>> >>>>>> it is *ALL* made up.<< >>>> That helps me to dimly understand that everything the chat says is >>>> simply plausible, no more than that. >>>> >>>> Maybe we should think of it as no more authoritative than the cocktail >>>> party chatter of someone who reads indiscriminately and can't shut up >>>> until they've spewed five paragraphs. >>>> >>>> Marcia >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via >>>> Hidden-discuss wrote: >>>> >>>> >>>> >>>> >>>> >>>> From what I understand (admittedly from only a *basic* understanding of >>>> machine learning), it is not so much that ChatGPT is "making errors", >>>> but rather that it is "making stuff up", and does not admit that it is >>>> making stuff up. >>>> >>>> I'm going to brain dump what I think here, but I'm not an expert in >>>> this >>>> by any stretch, so don't take me as an authority.? Perhaps this can >>>> help >>>> you reason about ChartGPT until you find a better expert to consult ;) >>>> >>>> One thing to understand is that this is a *trained* model.? That means >>>> that it was given a set of questions and answers and told "these are >>>> good, these are bad", probably with a rating of *how* good or bad. >>>> Then >>>> it was given a lot of other data (and how exactly this gets turned into >>>> questions and answers is *way* beyond my knowledge level).? Then a team >>>> of model trainers started asking questions.? The trainers would look at >>>> the answers it came up with and rate them, thus adding to the "trained" >>>> data set.? When you tell ChatGPT that its answer was good or bad, you >>>> are also potentially adding to that training data, by the way. >>>> >>>> I'm guessing that the way the system works there is actually no way for >>>> it to "know" that it has made something up.? The output that it >>>> produces >>>> is generated based on what you can think of as a very advanced version >>>> of statistical language modelling:? given a certain input, what are the >>>> most likely kinds of things that would follow as a response?? And like >>>> any statistical model, when you get enough standard deviations out, >>>> things get weird.? At no point in the model output are things tagged as >>>> "made up" or "not made up":? it is *ALL* made up. >>>> >>>> In the middle of the bell curve the made up things are *much* more >>>> likely to be "correct" than out at the edges of the bell curve.? But >>>> oh those edges... >>>> >>>> It is of course more sophisticated than a statistical model, but the >>>> same principle applies:? if there are few examples of *exactly* the >>>> kind >>>> of data your input contains, then it is going to draw from stuff that >>>> is >>>> a lot less closely related to your input for its response.? But, and >>>> here is the important part, it is going to make up *something* to >>>> answer >>>> with.? If a source is mentioned multiple times in the context of your >>>> input, it will use it.? If there are no sources mentioned in the >>>> context >>>> of your input, it will generate an output that looks like the *kind of >>>> thing* that would be a response to that *kind of input*.? In this case >>>> that included a list of articles.? It generated at least one of them >>>> from an author whose name was probably mentioned in the context of your >>>> input, but never with an actual article name attached.? Or maybe that >>>> author was mentioned in the context of conversations containing a >>>> subset of the *words* in your input (rather than logically formed >>>> sentences), depending on just how fuzzy the match was.? Then it >>>> effectively made up a plausible sounding article name to go with the >>>> author name, because that's what responses to other similar questions >>>> in >>>> its training data looked like (not similar in content, but similar in >>>> *form*). >>>> >>>> So while I agree that making up all the sources seems like an extreme >>>> example of this, ChatGPT is what Science Fiction calls an "Artificial >>>> Stupid" (something that can't actually *reason*), and thus I think my >>>> explanation is plausible.? It just depends on how fuzzy the match was >>>> that it made on the input.? If the match was very fuzzy, then it would >>>> have come back with material from its data that generally followed at >>>> least some of your input, and then since responses the trainers >>>> considered "good" to questions like that usually included some sources, >>>> it made some up based on how the answers to other, less related, >>>> questions looked. >>>> >>>> Anyone want to bet that four sources was the average number that was >>>> accepted as "a good answer" by the people who did the training?? I know >>>> I've seen "four things" in a couple of ChatGPT answers, and I haven't >>>> asked it very many questions :) >>>> >>>> Given all this, there are only two things you can do, one of which is >>>> exactly what you did: ask it for the sources.? Given *that* input, it >>>> should be able to come up with the most likely response being the >>>> actual >>>> source.? If it can't, then it has probably made up the source (note: I >>>> have not tested this technique myself, but it follows logically from >>>> how >>>> I think the system works). >>>> >>>> The second thing you can do (which you probably also already did) is to >>>> rephrase your input, giving it different amounts and kinds of context, >>>> and see how the output changes.? If your altered input results in a >>>> less >>>> fuzzy match, you will get better answers. >>>> >>>> The big takeaway, which you clearly already know, is to never trust >>>> anything ChatGPT produces.? Use it as a rough draft, but verify all the >>>> facts. >>>> >>>> My fear is that there are going to be a lot of people who aren't as >>>> diligent, and we'll end up with a lot of made up information out on the >>>> web adding to all of the maliciously bad information that is already >>>> out >>>> there.? I have read that the ChatGPT researchers are worried about how >>>> to avoid using ChatGPT's output as input to a later ChatGPT model, and >>>> I >>>> have no idea how they are going to achieve that! >>>> >>>> And keep in mind that that maliciously bad information *is part of >>>> ChatGPT's data set*.? Some of it the people who did the training will >>>> have >>>> caught, but I'm willing to bet they missed a lot of it because *they* >>>> didn't know it was bad, or it never came up during training. >>>> >>>> --David >>>> >>>> On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss >>>> wrote: >>>>> Yes, I know that people have been pointing out "ridiculous factual >>>>> errors" from ChatGPT.? ?However, to make up sources that sound >>>>> completely plausible but are fake seems like it belongs in a whole >>>>> other category. >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank >>>>> wrote: >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> ChatGPT is a conversation engine, not a search engine.? It is designed >>>>> to provide plausible responses based on similarity of questions and >>>>> answers to existing material on the internet, without attempting to >>>>> correlate its responses with actual facts.? Pretty much every social >>>>> media space I follow has had multiple posts from people pointing out >>>>> ridiculous factual errors from ChatGPT. >>>>> >>>>> --Alan >>>>> >>>>> >>>>> -------- Original Message -------- >>>>> Subject: [Hidden-tech] Question about ChatGPT and machine learning >>>>> Date: 2023-03-09 15:29 >>>>> From: Marcia Yudkin via Hidden-discuss >>>>> >>>>> To:"Hidden-discuss at lists.hidden-tech.net" >>>>> >>>>> >>>>> This question is for anyone who understands how the machine learning >>>>> in >>>>> ChatGPT works. >>>>> >>>>> I've been finding ChatGPT useful for summarizing information that is >>>>> widely dispersed around the web, such as questions like "what are the >>>>> most popular objections to X?"? However, the other day for a blog post >>>>> I >>>>> was writing I asked it "What are some sources on the relationship of X >>>>> to Y?"? It gave me four sources of information, including the article >>>>> title, where it was published and who wrote it. >>>>> >>>>> This looked great, especially since I recognized two of the author >>>>> names >>>>> as authorities on X.? However, when I then did a Google search, I >>>>> could >>>>> not track down any of the four articles, either by title, author or >>>>> place of publication.? I tried both in Google and in Bing.? Zilch! >>>>> >>>>> Could ChatGPT have totally made up these sources?? If so, how does >>>>> that >>>>> work? >>>>> >>>>> I am baffled about the explanation of this.? One of the publications >>>>> involved was Psychology Today, so we are not talking about obscure >>>>> corners of the Internet or sites that would have disappeared recently. >>>>> >>>>> Thanks for any insights. >>>>> >>>>> Marcia Yudkin >>>>> Introvert UpThink >>>>> Introvert UpThink | Marcia Yudkin | Substack >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> Introvert UpThink | Marcia Yudkin | Substack >>>>> ? Marcia Yudkin >>>>> ? Exploring how introverts are misunderstood, maligned and >>>>> underappreciated in our culture - yet still thrive. Cli... >>>>> >>>>> >>>>> _______________________________________________ >>>>> Hidden-discuss mailing list - home page:http://www.hidden-tech.net >>>>> Hidden-discuss at lists.hidden-tech.net >>>>> >>>>> You are receiving this because you are on the Hidden-Tech Discussion >>>>> list. >>>>> If you would like to change your list preferences, Go to the Members >>>>> page on the Hidden Tech Web site. >>>>> http://www.hidden-tech.net/members >>>>> _______________________________________________ >>>>> Hidden-discuss mailing list - home page:http://www.hidden-tech.net >>>>> Hidden-discuss at lists.hidden-tech.net >>>>> >>>>> You are receiving this because you are on the Hidden-Tech Discussion >>>>> list. >>>>> If you would like to change your list preferences, Go to the Members >>>>> page on the Hidden Tech Web site. >>>>> http://www.hidden-tech.net/members >>>> _______________________________________________ >>>> Hidden-discuss mailing list - home page:http://www.hidden-tech.net >>>> Hidden-discuss at lists.hidden-tech.net >>>> >>>> You are receiving this because you are on the Hidden-Tech Discussion >>>> list. >>>> If you would like to change your list preferences, Go to the Members >>>> page on the Hidden Tech Web site. >>>> http://www.hidden-tech.net/members >>>> _______________________________________________ >>>> Hidden-discuss mailing list - home page:http://www.hidden-tech.net >>>> Hidden-discuss at lists.hidden-tech.net >>>> >>>> You are receiving this because you are on the Hidden-Tech Discussion >>>> list. >>>> If you would like to change your list preferences, Go to the Members >>>> page on the Hidden Tech Web site. >>>> http://www.hidden-tech.net/members >>> >>> _______________________________________________ >>> Hidden-discuss mailing list - home page:http://www.hidden-tech.net >>> Hidden-discuss at lists.hidden-tech.net >>> >>> You are receiving this because you are on the Hidden-Tech Discussion >>> list. >>> If you would like to change your list preferences, Go to the Members >>> page on the Hidden Tech Web site. >>> http://www.hidden-tech.net/members >>> _______________________________________________ >>> Hidden-discuss mailing list - home page:http://www.hidden-tech.net >>> Hidden-discuss at lists.hidden-tech.net >>> >>> You are receiving this because you are on the Hidden-Tech Discussion list. >>> If you would like to change your list preferences, Go to the Members >>> page on the Hidden Tech Web site. >>> http://www.hidden-tech.net/members > > _______________________________________________ > Hidden-discuss mailing list - home page:http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members -------------- next part -------------- An HTML attachment was scrubbed... URL: From niftiness at gmail.com Fri Mar 17 23:16:56 2023 From: niftiness at gmail.com (Tim Boudreau) Date: Fri, 17 Mar 2023 19:16:56 -0400 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <1459188877.732266.1679013670900@mail.yahoo.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> Message-ID: ChatGPT will definitely change work that involves writing. Thus far, I've personally used it for amusement purposes; as a songwriter a lot of whose muse left with the end of teenage/twentysomething angst, I can say it's a useful tool when you have a good beginning to something and then ... nothing ... to get over that hump and find a path to finishing it. As a technologist, I think a few things are predictable. The history of every new communication medium in the last few hundred years is that barriers to entry come down, and it eventually becomes overrun by erstwhile commercial speech to the point of uselessness. Cases in point: - We still have a landline from when my dad was with us, since cell phones were not his thing. We do not ever answer it because it is all spam robo-calls. - 95% of the postal mail we get goes straight in the trash - I am reading this thread at all due to having finally cleaned out more than 100,000 unread messages 99% of which were junk or notifications - and that's *with* GMail's state-of-the-art filtering. I literally have looked at email once every few *months *the last few years ChatGPT finally realizes the possibility of generating content at such a scale that it seems predictable that within a couple of years, 99% of the content on the web will not have been written by humans (interestingly, this also makes ChatGPT the last generation of AI of its kind that won't be modeling the output of previous generations of itself - that will become an increasing problem for future generations of it and similar). Finally we have the technology to render the web useless :-) Companies will pop up - I'm sure they already have - claiming they have software that can detect AI generated content. That software will be used to train subsequent generations of AIs in the art of evading those detectors, and so on, in a permanent war of attrition. This will never actually work. Everyone will use it for their own individual, time and work-saving reasons. The collective consequences will make the web look very different - perhaps better edited, but with even more plausible-but-wrong content than it has today. That's not to say it's the apocalypse - just a logical consequence. In the long run, what *would* work is a web where every piece of content is cryptographically signed, and attributable to an actual human, and handled by browsers and similar the same way the "green bar" for https is. But anything like that is years away. We live in interesting times. -Tim On Thu, Mar 16, 2023 at 10:09?PM Marcia Yudkin via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > Here is a bit of additional information that came up today in an answer > from ChatGPT: > > >>as of my knowledge cutoff date of 2021.<< > > Don't know if this is relevant to any part of our discussion, but it's > good to know. > > > > > > > On Tuesday, March 14, 2023 at 04:39:18 AM HST, Rob Laporte > wrote: > > > > > > Hi All, > Good discussion I wish I had more time for, but a few replies to various > points made here: > * Shell summarizes well: ?But I do raise questions about what it all > means for the future of work, careers, creativity, etc?. Also true, ?a new > wave of creativity? can happen. > * Chatbot-written platforms long preceded gpt, so one can find vastly > inferior website content now from those now instantly antiquated platforms. > * Fact-checking and editing must happen, but that?s less time than > composing all, I think. > * I?ve not tested gpt?s writing extensively, but that legal letter I > mentioned was truly consummate, and will be used without a single word > changed. I have noticed redundancy, though usually resulting from how I > asked a question. > * Note that one way gpt and the like is and will be trained to > improve is via sophisticated AI-human systems by which to detect chatbot > text, making the writing output constantly better. > * Regarding SEO, a ton to say, but for now: > * gpt expedites all kinds of SEO work. > * My firm has long prepared for AI, esp last 2 years: > * It?s about ?Things not stings? (entities or concepts, not > text strings). But yes, keyword research, a model still prevailing in most > software that can't change cost-effectively now, will be expedited by gpt. > My firm is currently researching to what extent old-fashioned strings > correlate to the replacement: Entities within the MUM revolution. > * Web marketing pros must coordinate marketing channels > efficiently; AI is not good at dovetailing separate silos of knowledge, > like SEO, PPC, client profit logic, etc, and it's terrible at prioritizing > (within a firm's) context. Hence Shell?s ?creativity.? > * My published prediction 10 years ago that by now AI would > relieve SEO pros of lots of routine tech work (via Google Search Console) > has been proven way too optimistic--we?re still far from that fine day > unfortunately. > * Extrapolating from the SEO remarks, I believe professionals in > most fields will have to stay abreast of adjacent fields of knowledge, so > as to be able to efficiently coordinate riders of separate AI horses. > Broadly contextual solutions outside single domains of knowledge is > something I predict AI will remain deficient in for several years. This > goes to Shell?s ?new wave of creativity.? > > Best Regards, > > Rob Laporte > CEO | R&D Manager > DISC - Making Web Sites Make Money > Rob at 2disc.com, 413-584-6500 > www.2disc.com > > NOTE: Emails can be blocked by spam filters throughout the web. If you > don?t get a reply within an expected span of time, please call. > > > > ---- On Sun, 12 Mar 2023 11:57:09 -0400 Shel Horowitz via Hidden-discuss < > hidden-discuss at lists.hidden-tech.net> wrote --- > > > I agree with Marcia. ChatGPT is to a skilled writer as clip-art is to > graphics. It's adequate for basic needs but is no substitute for a designer > working specifically to convey your idea. > > > > I recently landed on a site that I could tell used a chatbot to write > its copy. I was repelled and left quickly--and crossed that consultant off > my list of people I'd ever hire, because she didn't disclose that a bot > wrote the site. Ugh! > > > > > > Shel Horowitz - "The Transformpreneur" > > ________________________________________________ > > Contact me to bake in profitability while addressing hunger, > > poverty, war, and catastrophic climate change > > * First business ever to be Green America Gold Certified > > * Inducted into the National Environmental Hall of Fame > > * Certified speaker: International Platform Association > > https://goingbeyondsustainability.com > > mailto:shel at greenandprofitable.com 413-586-2388 > > Award-winning, best-selling author of 10 books. > > Latest: Guerrilla Marketing to Heal the World > > (co-authored with Jay Conrad Levinson) > > > > Watch my TEDx Talk, > > "Impossible is a Dare: Business for a Better World" > > http://www.ted.com/tedx/events/11809 > > (move your mouse to "event videos") > > _________________________________________________ > > > > > > > > On Sat, Mar 11, 2023 at 9:45?PM Marcia Yudkin via Hidden-discuss < > hidden-discuss at lists.hidden-tech.net> wrote: > > > > _______________________________________________ > > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > > Hidden-discuss at lists.hidden-tech.net > > > > You are receiving this because you are on the Hidden-Tech Discussion > list. > > If you would like to change your list preferences, Go to the Members > > page on the Hidden Tech Web site. > > http://www.hidden-tech.net/members > >> I'm not sure I would call its output "well-written," though, It's > competently written, well-organized and with proper grammar, but stodgy in > style and unimaginative, unless you've given it instructions like "write in > the style of ____." > >> > >> > >> > >> > >> > >> > >> On Saturday, March 11, 2023 at 04:22:19 PM HST, Alan Frank < > alan at 8wheels.org> wrote: > >> > >> > >> > >> > >> > >> I would use it for composing text, but not for facts at all. And if I > >> asked it "How would you introduce George Takei at a political > >> convention"," I would expect well-written text, but would also > >> fact-check everything. > >> > >> > >> -------- Original Message -------- > >> Subject: Re: [Hidden-tech] Question about ChatGPT and machine learning > >> Date: 2023-03-11 15:55 > >> From: Marcia Yudkin via Hidden-discuss > >> > >> To: Rob Laporte > >> > >> Rob, > >> > >> Those are very interesting examples. It's a mixed track record, > >> though. Based on your experience, what would you say ChatGPT should > and > >> shouldn't be used for, or how it should or shouldn't be used? > >> > >> For example, based on the errors in bios you saw, would you still use > it > >> for those artist bios given that you'd have to meticulously fact check > >> everything it wrote? > >> > >> Marcia Yudkin > >> > >> > >> > >> > >> > >> > >> On Saturday, March 11, 2023 at 10:19:20 AM HST, Rob Laporte > >> wrote: > >> > >> > >> > >> > >> > >> I've noticed that humans simplify complex, especially threatening, new > >> things, by using dichotomies of good and evil, red and blue, etc, and > >> that conviction is often inversely proportionate to knowledge. I've > >> worked in search marketing for 28 years, have digested a lot about the > >> kind of tech and processes underlying chatGPT (gpt), and I have no > sound > >> basis for strong conviction on any of the issues broached here. But > this > >> I can tell you from my mere 6 hours novice use so far: > >> > >> * In under an hour it solved a complex HR-financial legal question, > >> and provided the letter the plaintiff should write to the corporation's > >> HR department, with quality of writing, sensitivity to workplace > >> politics, and effective brevity way better than anything I can recall > in > >> my 50 years of adulthood, decade teaching college lit and writing, and > 3 > >> decades in search marketing. Truly stunning. Save at least $2000 in > >> legal fees that might have gone to a local law firm. > >> * A few times over the years I researched best email spam blocking > >> solutions, and considered an aggressive form of it. gpt explained the > >> problem with that solution, and did so way faster than my past and > still > >> inclusive Google searches, saving me a few hundred dollars in IT > >> consulting. > >> * It completely conflated my semi-famous lawyer grandad's bio with > >> that of his stellar but less accomplished son of the same name. Both > are > >> years deceased (most gpt data ends Sept '21), yet totally wrong. > >> * So too it got the bio of a decade-deceased famous scholar of > Roman > >> architecture (friend's dad) wrong on a few points, most notably that > >> most his career was at Smith college, not Princeton as gpt said. > >> * It produced strikingly eloquent spreadsheet solutions for two > >> different complex purposes. I asked it for the actual spreadsheet, and > >> cell references were off, but in a second half hour of work, I'm sure > it > >> wold have gotten it right or I could correct it myself. A few hours of > >> work time saved there, and one of the two tasks was billable. > >> * My firm had a prospective writing project for a client, involving > >> bios of famous and notable painters sold by the client. I say "had" > >> because now gpt or its structuring within services like NeuroFash will > >> cut both client and my firm's copywriter time substantially. > >> * I've not tried but viewed a YouTube of good, arguably very good, > >> graphic design for a marketing campaign done in well under half a day. > >> Outside of broad ideological judgements, there's much to consider in > how > >> gpt will change work and incomes. > >> > >> The current version 3.5 will be upgraded to 4.0 within weeks. Think of > >> gpt like the web in 1994 or personal PCs in 1981, with advancements > >> happening 10x faster. > >> > >> Best Regards, > >> > >> Rob Laporte > >> CEO | R&D Manager > >> DISC - Making Web Sites Make Money > >> Rob at 2disc.com, 413-584-6500 > >> www.2disc.com > >> > >> NOTE: Emails can be blocked by spam filters throughout the web. If you > >> don?t get a reply within an expected span of time, please call. > >> > >> > >> > >> ---- On Fri, 10 Mar 2023 20:36:26 -0500 Marcia Yudkin via > Hidden-discuss > >> wrote --- > >> > >>> David, > >>> > >>> Some great points there. I especially like this one: > >>> > >>>>> it is *ALL* made up.<< > >>> > >>> That helps me to dimly understand that everything the chat says is > >>> simply plausible, no more than that. > >>> > >>> Maybe we should think of it as no more authoritative than the cocktail > >>> party chatter of someone who reads indiscriminately and can't shut up > >>> until they've spewed five paragraphs. > >>> > >>> Marcia > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> > >>> On Friday, March 10, 2023 at 12:34:46 PM HST, R. David Murray via > >>> Hidden-discuss wrote: > >>> > >>> > >>> > >>> > >>> > >>> From what I understand (admittedly from only a *basic* understanding of > >>> machine learning), it is not so much that ChatGPT is "making errors", > >>> but rather that it is "making stuff up", and does not admit that it is > >>> making stuff up. > >>> > >>> I'm going to brain dump what I think here, but I'm not an expert in > >>> this > >>> by any stretch, so don't take me as an authority. Perhaps this can > >>> help > >>> you reason about ChartGPT until you find a better expert to consult ;) > >>> > >>> One thing to understand is that this is a *trained* model. That means > >>> that it was given a set of questions and answers and told "these are > >>> good, these are bad", probably with a rating of *how* good or bad. > >>> Then > >>> it was given a lot of other data (and how exactly this gets turned into > >>> questions and answers is *way* beyond my knowledge level). Then a team > >>> of model trainers started asking questions. The trainers would look at > >>> the answers it came up with and rate them, thus adding to the "trained" > >>> data set. When you tell ChatGPT that its answer was good or bad, you > >>> are also potentially adding to that training data, by the way. > >>> > >>> I'm guessing that the way the system works there is actually no way for > >>> it to "know" that it has made something up. The output that it > >>> produces > >>> is generated based on what you can think of as a very advanced version > >>> of statistical language modelling: given a certain input, what are the > >>> most likely kinds of things that would follow as a response? And like > >>> any statistical model, when you get enough standard deviations out, > >>> things get weird. At no point in the model output are things tagged as > >>> "made up" or "not made up": it is *ALL* made up. > >>> > >>> In the middle of the bell curve the made up things are *much* more > >>> likely to be "correct" than out at the edges of the bell curve. But > >>> oh those edges... > >>> > >>> It is of course more sophisticated than a statistical model, but the > >>> same principle applies: if there are few examples of *exactly* the > >>> kind > >>> of data your input contains, then it is going to draw from stuff that > >>> is > >>> a lot less closely related to your input for its response. But, and > >>> here is the important part, it is going to make up *something* to > >>> answer > >>> with. If a source is mentioned multiple times in the context of your > >>> input, it will use it. If there are no sources mentioned in the > >>> context > >>> of your input, it will generate an output that looks like the *kind of > >>> thing* that would be a response to that *kind of input*. In this case > >>> that included a list of articles. It generated at least one of them > >>> from an author whose name was probably mentioned in the context of your > >>> input, but never with an actual article name attached. Or maybe that > >>> author was mentioned in the context of conversations containing a > >>> subset of the *words* in your input (rather than logically formed > >>> sentences), depending on just how fuzzy the match was. Then it > >>> effectively made up a plausible sounding article name to go with the > >>> author name, because that's what responses to other similar questions > >>> in > >>> its training data looked like (not similar in content, but similar in > >>> *form*). > >>> > >>> So while I agree that making up all the sources seems like an extreme > >>> example of this, ChatGPT is what Science Fiction calls an "Artificial > >>> Stupid" (something that can't actually *reason*), and thus I think my > >>> explanation is plausible. It just depends on how fuzzy the match was > >>> that it made on the input. If the match was very fuzzy, then it would > >>> have come back with material from its data that generally followed at > >>> least some of your input, and then since responses the trainers > >>> considered "good" to questions like that usually included some sources, > >>> it made some up based on how the answers to other, less related, > >>> questions looked. > >>> > >>> Anyone want to bet that four sources was the average number that was > >>> accepted as "a good answer" by the people who did the training? I know > >>> I've seen "four things" in a couple of ChatGPT answers, and I haven't > >>> asked it very many questions :) > >>> > >>> Given all this, there are only two things you can do, one of which is > >>> exactly what you did: ask it for the sources. Given *that* input, it > >>> should be able to come up with the most likely response being the > >>> actual > >>> source. If it can't, then it has probably made up the source (note: I > >>> have not tested this technique myself, but it follows logically from > >>> how > >>> I think the system works). > >>> > >>> The second thing you can do (which you probably also already did) is to > >>> rephrase your input, giving it different amounts and kinds of context, > >>> and see how the output changes. If your altered input results in a > >>> less > >>> fuzzy match, you will get better answers. > >>> > >>> The big takeaway, which you clearly already know, is to never trust > >>> anything ChatGPT produces. Use it as a rough draft, but verify all the > >>> facts. > >>> > >>> My fear is that there are going to be a lot of people who aren't as > >>> diligent, and we'll end up with a lot of made up information out on the > >>> web adding to all of the maliciously bad information that is already > >>> out > >>> there. I have read that the ChatGPT researchers are worried about how > >>> to avoid using ChatGPT's output as input to a later ChatGPT model, and > >>> I > >>> have no idea how they are going to achieve that! > >>> > >>> And keep in mind that that maliciously bad information *is part of > >>> ChatGPT's data set*. Some of it the people who did the training will > >>> have > >>> caught, but I'm willing to bet they missed a lot of it because *they* > >>> didn't know it was bad, or it never came up during training. > >>> > >>> --David > >>> > >>> On Fri, 10 Mar 2023 03:14:21 +0000, Marcia Yudkin via Hidden-discuss > >>> wrote: > >>>> Yes, I know that people have been pointing out "ridiculous factual > >>>> errors" from ChatGPT. However, to make up sources that sound > >>>> completely plausible but are fake seems like it belongs in a whole > >>>> other category. > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> On Thursday, March 9, 2023 at 04:10:43 PM HST, Alan Frank > >>>> wrote: > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> ChatGPT is a conversation engine, not a search engine. It is designed > >>>> to provide plausible responses based on similarity of questions and > >>>> answers to existing material on the internet, without attempting to > >>>> correlate its responses with actual facts. Pretty much every social > >>>> media space I follow has had multiple posts from people pointing out > >>>> ridiculous factual errors from ChatGPT. > >>>> > >>>> --Alan > >>>> > >>>> > >>>> -------- Original Message -------- > >>>> Subject: [Hidden-tech] Question about ChatGPT and machine learning > >>>> Date: 2023-03-09 15:29 > >>>> From: Marcia Yudkin via Hidden-discuss > >>>> > >>>> To: "Hidden-discuss at lists.hidden-tech.net" > >>>> > >>>> > >>>> This question is for anyone who understands how the machine learning > >>>> in > >>>> ChatGPT works. > >>>> > >>>> I've been finding ChatGPT useful for summarizing information that is > >>>> widely dispersed around the web, such as questions like "what are the > >>>> most popular objections to X?" However, the other day for a blog > post > >>>> I > >>>> was writing I asked it "What are some sources on the relationship of X > >>>> to Y?" It gave me four sources of information, including the article > >>>> title, where it was published and who wrote it. > >>>> > >>>> This looked great, especially since I recognized two of the author > >>>> names > >>>> as authorities on X. However, when I then did a Google search, I > >>>> could > >>>> not track down any of the four articles, either by title, author or > >>>> place of publication. I tried both in Google and in Bing. Zilch! > >>>> > >>>> Could ChatGPT have totally made up these sources? If so, how does > >>>> that > >>>> work? > >>>> > >>>> I am baffled about the explanation of this. One of the publications > >>>> involved was Psychology Today, so we are not talking about obscure > >>>> corners of the Internet or sites that would have disappeared recently. > >>>> > >>>> Thanks for any insights. > >>>> > >>>> Marcia Yudkin > >>>> Introvert UpThink > >>>> Introvert UpThink | Marcia Yudkin | Substack > >>>> > >>>> > >>>> > >>>> > >>>> > >>>> Introvert UpThink | Marcia Yudkin | Substack > >>>> Marcia Yudkin > >>>> Exploring how introverts are misunderstood, maligned and > >>>> underappreciated in our culture - yet still thrive. Cli... > >>>> > >>>> > >>>> _______________________________________________ > >>>> Hidden-discuss mailing list - home page: http://www.hidden-tech.net > >>>> Hidden-discuss at lists.hidden-tech.net > >>>> > >>>> You are receiving this because you are on the Hidden-Tech Discussion > >>>> list. > >>>> If you would like to change your list preferences, Go to the Members > >>>> page on the Hidden Tech Web site. > >>>> http://www.hidden-tech.net/members > >>>> _______________________________________________ > >>>> Hidden-discuss mailing list - home page: http://www.hidden-tech.net > >>>> Hidden-discuss at lists.hidden-tech.net > >>>> > >>>> You are receiving this because you are on the Hidden-Tech Discussion > >>>> list. > >>>> If you would like to change your list preferences, Go to the Members > >>>> page on the Hidden Tech Web site. > >>>> http://www.hidden-tech.net/members > >>> _______________________________________________ > >>> Hidden-discuss mailing list - home page: http://www.hidden-tech.net > >>> Hidden-discuss at lists.hidden-tech.net > >>> > >>> You are receiving this because you are on the Hidden-Tech Discussion > >>> list. > >>> If you would like to change your list preferences, Go to the Members > >>> page on the Hidden Tech Web site. > >>> http://www.hidden-tech.net/members > >>> _______________________________________________ > >>> Hidden-discuss mailing list - home page: http://www.hidden-tech.net > >>> Hidden-discuss at lists.hidden-tech.net > >>> > >>> You are receiving this because you are on the Hidden-Tech Discussion > >>> list. > >>> If you would like to change your list preferences, Go to the Members > >>> page on the Hidden Tech Web site. > >>> http://www.hidden-tech.net/members > >> > >> > >> _______________________________________________ > >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net > >> Hidden-discuss at lists.hidden-tech.net > >> > >> You are receiving this because you are on the Hidden-Tech Discussion > >> list. > >> If you would like to change your list preferences, Go to the Members > >> page on the Hidden Tech Web site. > >> http://www.hidden-tech.net/members > >> _______________________________________________ > >> Hidden-discuss mailing list - home page: http://www.hidden-tech.net > >> Hidden-discuss at lists.hidden-tech.net > >> > >> You are receiving this because you are on the Hidden-Tech Discussion > list. > >> If you would like to change your list preferences, Go to the Members > >> page on the Hidden Tech Web site. > >> http://www.hidden-tech.net/members > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -- http://timboudreau.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lion at catamount.net Sat Mar 18 03:38:24 2023 From: lion at catamount.net (Catamount) Date: Fri, 17 Mar 2023 23:38:24 -0400 Subject: [Hidden-tech] Aruba ac Access Points Message-ID: <78e87d58-3644-33aa-533c-9ab926bc97b2@catamount.net> Hi folks. I'm looking to get in touch with folk that want more Aruba (HPE) APs for their org. Preferably schools, colleges, state agencies. They preferably should already have established Aruba/HPE infrastructure, to keep it simple. (You need a controller, for these fine fish.) I'm trying to prevent a large number of these things from being destroyed instead of re-used. Some weeks, over 100 stack up. (2016-2018 era models, all 802.11ac. 200 & 300 series. POE, of course.) I have to navigate a couple levels to convince the higher-ups, once some org(s) are potentially found.? Previous efforts to find a re-home have not panned out. But I sincerely believe someone (near or far) will re-home them if we shout from the rooftops. I'd steer it off-list once chatter starts. -- Holleran Greenburger Contractor to Network Operations UMass Amherst IT 413-522-7868 lion at catamount.net hollerang at umass.edu From rdmurray at bitdance.com Sat Mar 18 14:33:21 2023 From: rdmurray at bitdance.com (R. David Murray) Date: Sat, 18 Mar 2023 10:33:21 -0400 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> Message-ID: <20230318143322.5274BB14004@webabinitio.net> On Fri, 17 Mar 2023 19:16:56 -0400, Tim Boudreau via Hidden-discuss wrote: > That's not to say it's the apocalypse - just a logical consequence. In the > long run, what *would* work is a web where every piece of content is > cryptographically signed, and attributable to an actual human, and handled > by browsers and similar the same way the "green bar" for https is. But > anything like that is years away. I actually put a lot of thought into this a few years back, thinking it might be fun to start working on such a system. It turns out it is a Hard Problem, both because cryptographically signed content is not easy to retrofit into browsers securely, but also because it is impossible to know for certain where the content actually originated. In your example, even if you've found a way to link the key to a verified human, how do you prove the human with the private key actually generated their own content? So what you end up needing is a *reputation* system. Basically you need a truly *distributed* reputation algorithm, and you probably end up with various levels: reputation brokers that have their own reputation, friend networks where you've actually met the person with the key, etc. Analogous to the GPG key signing model, but with a more sophisticated reputation algorithm derived from a continuous stream of ratings made by key holders interacting with other key holders. Those ratings are analogous to client/vendor ratings on amazon and other such marketplaces, but applied to all interactions, not just purchase and sale activity. I contemplated what it would take to build a phone ap for this, but the problem with that is that making a peer to peer connection between phones is hard (no public address), which means you are depending on servers. So, someone could take this on as a business and build the servers, but then you lose the decentralization that is critical to being able to build a trust network instead of having to trust a single business entity (and we all know how well that turns out in the long run). Might have still worked if done as open source. But, it was while going down that trust rabbit hole that I gave up, because I'm lazy and it looked like a *lot* of work and, at least at the time, very little likelihood of any uptake ;) I think the latter is still true...for now. I think if someone actually wants to work on this it would probably be best done as an extension of B2B blockchain technology. I went down *that* rabbit hole for a while, but at the time the software in that space wasn't mature enough to serve as a practical platform for this. I wonder if that has changed... --David From tech at montaguewebworks.com Mon Mar 27 14:22:27 2023 From: tech at montaguewebworks.com (Michael Muller) Date: Mon, 27 Mar 2023 10:22:27 -0400 Subject: [Hidden-tech] Question about ChatGPT and machine learning In-Reply-To: <20230318143322.5274BB14004@webabinitio.net> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> <20230318143322.5274BB14004@webabinitio.net> Message-ID: <7c09cfe7-9e5f-2752-163e-17a905f8eeb5@montaguewebworks.com> Not deriding your thoughts, Tim, but this sounds very much like what they're building in China, except its also linked to your bank and your doctor and the jails and the government and the government and the government. When you receive a phone call, there is a side message letting you know if the reputation "level" or whatever they call it, is above or below yours. If it's way below yours, you are warned about associating with this person. Hard to tell how this will all shake out. Who knows. We may go back to paper. Mik Mik Muller, president Montague WebWorks 20 River Street, Greenfield, MA 413-320-5336 http://MontagueWebWorks.com Powered by ROCKETFUSION On 3/18/2023 10:33 AM, R. David Murray via Hidden-discuss wrote: > On Fri, 17 Mar 2023 19:16:56 -0400, Tim Boudreau via Hidden-discuss wrote: >> That's not to say it's the apocalypse - just a logical consequence. In the >> long run, what *would* work is a web where every piece of content is >> cryptographically signed, and attributable to an actual human, and handled >> by browsers and similar the same way the "green bar" for https is. But >> anything like that is years away. > I actually put a lot of thought into this a few years back, thinking it > might be fun to start working on such a system. It turns out it is a > Hard Problem, both because cryptographically signed content is not easy > to retrofit into browsers securely, but also because it is impossible to > know for certain where the content actually originated. In your > example, even if you've found a way to link the key to a verified human, > how do you prove the human with the private key actually generated their > own content? > > So what you end up needing is a *reputation* system. Basically you need > a truly *distributed* reputation algorithm, and you probably end up with > various levels: reputation brokers that have their own reputation, > friend networks where you've actually met the person with the key, etc. > Analogous to the GPG key signing model, but with a more sophisticated > reputation algorithm derived from a continuous stream of ratings made by > key holders interacting with other key holders. Those ratings are > analogous to client/vendor ratings on amazon and other such > marketplaces, but applied to all interactions, not just purchase and > sale activity. > > I contemplated what it would take to build a phone ap for this, but the > problem with that is that making a peer to peer connection between > phones is hard (no public address), which means you are depending on > servers. So, someone could take this on as a business and build the > servers, but then you lose the decentralization that is critical to > being able to build a trust network instead of having to trust a single > business entity (and we all know how well that turns out in the long > run). Might have still worked if done as open source. > > But, it was while going down that trust rabbit hole that I gave up, > because I'm lazy and it looked like a *lot* of work and, at least at the > time, very little likelihood of any uptake ;) I think the latter is > still true...for now. > > I think if someone actually wants to work on this it would probably be > best done as an extension of B2B blockchain technology. I went down > *that* rabbit hole for a while, but at the time the software in that > space wasn't mature enough to serve as a practical platform for this. I > wonder if that has changed... > > --David > _______________________________________________ > Hidden-discuss mailing list - home page:http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members -------------- next part -------------- An HTML attachment was scrubbed... URL: From tech at montaguewebworks.com Mon Mar 27 15:11:23 2023 From: tech at montaguewebworks.com (Michael Muller) Date: Mon, 27 Mar 2023 11:11:23 -0400 Subject: [Hidden-tech] Looking for VBA / Access programmer Message-ID: Got this from someone over the weekend. Please reply directly to them. Thanks, Mik --------------------------------- email: gsr01060 at yahoo.com Name: ??? Graham Ridley Comments: ??? Hi, I'm looking for an MS Access/VBA/SQL Server programmer in the valley --------------------------------- -- Mik Muller, president Montague WebWorks 20 River Street, Greenfield, MA 413-320-5336 http://MontagueWebWorks.com Powered by ROCKETFUSION -------------- next part -------------- An HTML attachment was scrubbed... URL: From tech at montaguewebworks.com Tue Mar 28 19:28:15 2023 From: tech at montaguewebworks.com (Michael Muller) Date: Tue, 28 Mar 2023 15:28:15 -0400 Subject: [Hidden-tech] ChatGPT prompt -- I'm impressed In-Reply-To: <7c09cfe7-9e5f-2752-163e-17a905f8eeb5@montaguewebworks.com> References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> <20230318143322.5274BB14004@webabinitio.net> <7c09cfe7-9e5f-2752-163e-17a905f8eeb5@montaguewebworks.com> Message-ID: Ok, I finally set up an account at chat.openai.com/chat and gave the following prompt: "Write a short story about a Zimbabwe boy child and a Chinese girl child who meet at a festival and become friends." The result was a well written, pretty heart warming story. As a programmer, I can sort of softly peel back a couple of the layers to explain how it came up with this, but not the whole thing. BTW, the Arts festival it lists is real and is happening in four weeks in Zimbabwe, and Tawanda is the 54th most popular name for boys there. Mik ----- *Write a short story about a Zimbabwe boy child and a Chinese girl child who meet at a festival and become friends.* Tawanda had always been fascinated by the colorful posters plastered all over town, announcing the upcoming Harare International Festival of the Arts. He had never been to a festival before, and the idea of seeing music, theater, and dance performances from all over the world made him tingle with excitement. On the first day of the festival, Tawanda found himself wandering through the bustling crowds, his eyes wide with wonder. Everywhere he looked, there were people from all walks of life, dressed in their most colorful and extravagant outfits. As he was making his way through a particularly busy section of the festival, he spotted a young girl standing by herself, gazing up at the towering buildings around her. She looked lost and a little scared, and Tawanda felt a sudden urge to help her. "Hello," he said, approaching her cautiously. "Are you okay?" The girl turned to him and smiled, and Tawanda was struck by her beauty. Her eyes were dark and almond-shaped, and her hair was black and shiny, pulled back into two long braids. She wore a traditional Chinese dress, with a bright red silk skirt and a white blouse adorned with delicate embroidery. "I'm fine," she said in perfect English. "I'm just a little lost. My parents went to see a show, and I got separated from them." Tawanda knew what it was like to feel lost in a crowd. He offered to help the girl look for her parents, and together they started weaving their way through the sea of people. As they walked, Tawanda learned that the girl's name was Mei, and that she was visiting Zimbabwe with her family. As they searched, Tawanda and Mei discovered that they had more in common than they thought. They both loved music, and Tawanda was thrilled to learn that Mei played the erhu, a traditional Chinese instrument. Mei was equally fascinated by Tawanda's stories of Zimbabwean mbira music and dance. Eventually, they found Mei's parents waiting anxiously by the entrance to the main theater. Mei's mother was overjoyed to see her daughter safe and sound, and she thanked Tawanda for his kindness. Tawanda and Mei exchanged phone numbers, promising to stay in touch after the festival was over. For the rest of the festival, Tawanda and Mei hung out together, exploring the different performances and getting to know each other better. They laughed and danced and sang, and Tawanda felt a deep sense of joy and connection that he had never experienced before. He had made a friend from a completely different culture, and it had opened up a whole new world for him. As the festival came to a close, Tawanda felt a twinge of sadness knowing that he and Mei would have to say goodbye. But he also felt a sense of gratitude, for the friendship that had bloomed between them in the most unexpected of places. He knew that he would always treasure the memory of that magical week, and the girl from China who had taught him that no matter where you come from, music and friendship can bring people together. ----- Mik Muller, president Montague WebWorks 20 River Street, Greenfield, MA 413-320-5336 http://MontagueWebWorks.com Powered by ROCKETFUSION -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtblechner at gmail.com Wed Mar 29 03:54:14 2023 From: rtblechner at gmail.com (Ron Blechner) Date: Tue, 28 Mar 2023 23:54:14 -0400 Subject: [Hidden-tech] ChatGPT prompt -- I'm impressed In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> <20230318143322.5274BB14004@webabinitio.net> <7c09cfe7-9e5f-2752-163e-17a905f8eeb5@montaguewebworks.com> Message-ID: When you repeat requests for stories, you'll notice it uses the same formula, same length, same basic way it describes people. Ask it ten times for conversations / meetings between 2 different people, and the patterns will be obvious. The first-glance impressiveness gives way to "oh, ChatGPT is super formulaic". Basically it's "Oh, this user gave me a prompt, let me go and search for the most basic and popular things I know about this prompt, and regurgitate it into a predictable format". It's still more impressive than chat AI from 5 or 10 years ago, but it has a ways to go. -Ron Blechner On Tue, Mar 28, 2023 at 11:40?PM Michael Muller via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > Ok, I finally set up an account at chat.openai.com/chat and gave the > following prompt: "Write a short story about a Zimbabwe boy child and a > Chinese girl child who meet at a festival and become friends." The result > was a well written, pretty heart warming story. > > As a programmer, I can sort of softly peel back a couple of the layers to > explain how it came up with this, but not the whole thing. > > BTW, the Arts festival it lists is real and is happening in four weeks in > Zimbabwe, and Tawanda is the 54th most popular name for boys there. > > Mik > > ----- > > *Write a short story about a Zimbabwe boy child and a Chinese girl child > who meet at a festival and become friends.* > > Tawanda had always been fascinated by the colorful posters plastered all > over town, announcing the upcoming Harare International Festival of the > Arts. He had never been to a festival before, and the idea of seeing music, > theater, and dance performances from all over the world made him tingle > with excitement. > > On the first day of the festival, Tawanda found himself wandering through > the bustling crowds, his eyes wide with wonder. Everywhere he looked, there > were people from all walks of life, dressed in their most colorful and > extravagant outfits. As he was making his way through a particularly busy > section of the festival, he spotted a young girl standing by herself, > gazing up at the towering buildings around her. She looked lost and a > little scared, and Tawanda felt a sudden urge to help her. > > "Hello," he said, approaching her cautiously. "Are you okay?" > > The girl turned to him and smiled, and Tawanda was struck by her beauty. > Her eyes were dark and almond-shaped, and her hair was black and shiny, > pulled back into two long braids. She wore a traditional Chinese dress, > with a bright red silk skirt and a white blouse adorned with delicate > embroidery. > > "I'm fine," she said in perfect English. "I'm just a little lost. My > parents went to see a show, and I got separated from them." > > Tawanda knew what it was like to feel lost in a crowd. He offered to help > the girl look for her parents, and together they started weaving their way > through the sea of people. As they walked, Tawanda learned that the girl's > name was Mei, and that she was visiting Zimbabwe with her family. > > As they searched, Tawanda and Mei discovered that they had more in common > than they thought. They both loved music, and Tawanda was thrilled to learn > that Mei played the erhu, a traditional Chinese instrument. Mei was equally > fascinated by Tawanda's stories of Zimbabwean mbira music and dance. > > Eventually, they found Mei's parents waiting anxiously by the entrance to > the main theater. Mei's mother was overjoyed to see her daughter safe and > sound, and she thanked Tawanda for his kindness. Tawanda and Mei exchanged > phone numbers, promising to stay in touch after the festival was over. > > For the rest of the festival, Tawanda and Mei hung out together, exploring > the different performances and getting to know each other better. They > laughed and danced and sang, and Tawanda felt a deep sense of joy and > connection that he had never experienced before. He had made a friend from > a completely different culture, and it had opened up a whole new world for > him. > > As the festival came to a close, Tawanda felt a twinge of sadness knowing > that he and Mei would have to say goodbye. But he also felt a sense of > gratitude, for the friendship that had bloomed between them in the most > unexpected of places. He knew that he would always treasure the memory of > that magical week, and the girl from China who had taught him that no > matter where you come from, music and friendship can bring people together. > > ----- > > > Mik Muller, president > Montague WebWorks > 20 River Street, Greenfield, MA > 413-320-5336http://MontagueWebWorks.com > Powered by ROCKETFUSION > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rich at vibrancetech.com Wed Mar 29 12:13:46 2023 From: rich at vibrancetech.com (Rich Eichacker) Date: Wed, 29 Mar 2023 08:13:46 -0400 Subject: [Hidden-tech] ChatGPT prompt -- I'm impressed In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> <20230318143322.5274BB14004@webabinitio.net> <7c09cfe7-9e5f-2752-163e-17a905f8eeb5@montaguewebworks.com> Message-ID: <000d01d96237$ea3e3910$bebaab30$@vibrancetech.com> This is excellent. A ran your prompt on Bard and here?s what it produced. In my opinion, ChatGPT wrote a way better story. Bard?s is a bit choppy ? just a bunch of sentences lumped together in paragraphs - and nowhere near as descriptive as Chat?s. I gave Bard a thumbs down and told it that Chat wrote a better story. Wonder what it will think about that! It was the first day of the annual cultural festival in Zimbabwe. The streets were filled with people from all over the country, celebrating their different cultures. Among the crowd was a young Zimbabwean boy named Tendai and a young Chinese girl named Mei. Tendai was excited to be at the festival. He had never been to China before, and he was eager to learn more about its culture. Mei was also excited to be at the festival. She had never been to Zimbabwe before, and she was eager to learn more about its culture. Tendai and Mei were both drawn to the same booth, which was selling traditional Chinese food. They both ordered a plate of dumplings, and they started talking. They quickly realized that they had a lot in common. They both loved to learn new things, and they both loved to travel. As they talked, Tendai and Mei became friends. They talked about their families, their schools, and their dreams for the future. They also talked about the festival, and how much they were enjoying it. When it was time to go, Tendai and Mei exchanged contact information. They promised to keep in touch, and they both looked forward to meeting again. Tendai and Mei continued to keep in touch after the festival. They talked on the phone, they sent emails, and they even visited each other in their respective countries. They learned a lot about each other's cultures, and they grew even closer as friends. One day, Tendai and Mei decided to take a trip together. They went to China, and they had a wonderful time exploring the country. They visited many different cities, and they learned a lot about Chinese history and culture. Tendai and Mei's friendship continued to grow, and they became even closer. They were both very happy that they had met each other, and they were both grateful for the friendship that they shared. ____________________________________________________________ Rich Eichacker, President Vibrance Technology Corp. 866-VIBRANCE (842-7262) www.vibrancetech.com Software and Database Development (and satisfied customers) since 1995. ___________________________________________________________ From: Hidden-discuss On Behalf Of Michael Muller via Hidden-discuss Sent: Tuesday, March 28, 2023 03:28 PM To: hidden-discuss Subject: [Hidden-tech] ChatGPT prompt -- I'm impressed Ok, I finally set up an account at chat.openai.com/chat and gave the following prompt: "Write a short story about a Zimbabwe boy child and a Chinese girl child who meet at a festival and become friends." The result was a well written, pretty heart warming story. As a programmer, I can sort of softly peel back a couple of the layers to explain how it came up with this, but not the whole thing. BTW, the Arts festival it lists is real and is happening in four weeks in Zimbabwe, and Tawanda is the 54th most popular name for boys there. Mik ----- Write a short story about a Zimbabwe boy child and a Chinese girl child who meet at a festival and become friends. Tawanda had always been fascinated by the colorful posters plastered all over town, announcing the upcoming Harare International Festival of the Arts. He had never been to a festival before, and the idea of seeing music, theater, and dance performances from all over the world made him tingle with excitement. On the first day of the festival, Tawanda found himself wandering through the bustling crowds, his eyes wide with wonder. Everywhere he looked, there were people from all walks of life, dressed in their most colorful and extravagant outfits. As he was making his way through a particularly busy section of the festival, he spotted a young girl standing by herself, gazing up at the towering buildings around her. She looked lost and a little scared, and Tawanda felt a sudden urge to help her. "Hello," he said, approaching her cautiously. "Are you okay?" The girl turned to him and smiled, and Tawanda was struck by her beauty. Her eyes were dark and almond-shaped, and her hair was black and shiny, pulled back into two long braids. She wore a traditional Chinese dress, with a bright red silk skirt and a white blouse adorned with delicate embroidery. "I'm fine," she said in perfect English. "I'm just a little lost. My parents went to see a show, and I got separated from them." Tawanda knew what it was like to feel lost in a crowd. He offered to help the girl look for her parents, and together they started weaving their way through the sea of people. As they walked, Tawanda learned that the girl's name was Mei, and that she was visiting Zimbabwe with her family. As they searched, Tawanda and Mei discovered that they had more in common than they thought. They both loved music, and Tawanda was thrilled to learn that Mei played the erhu, a traditional Chinese instrument. Mei was equally fascinated by Tawanda's stories of Zimbabwean mbira music and dance. Eventually, they found Mei's parents waiting anxiously by the entrance to the main theater. Mei's mother was overjoyed to see her daughter safe and sound, and she thanked Tawanda for his kindness. Tawanda and Mei exchanged phone numbers, promising to stay in touch after the festival was over. For the rest of the festival, Tawanda and Mei hung out together, exploring the different performances and getting to know each other better. They laughed and danced and sang, and Tawanda felt a deep sense of joy and connection that he had never experienced before. He had made a friend from a completely different culture, and it had opened up a whole new world for him. As the festival came to a close, Tawanda felt a twinge of sadness knowing that he and Mei would have to say goodbye. But he also felt a sense of gratitude, for the friendship that had bloomed between them in the most unexpected of places. He knew that he would always treasure the memory of that magical week, and the girl from China who had taught him that no matter where you come from, music and friendship can bring people together. ----- Mik Muller, president Montague WebWorks 20 River Street, Greenfield, MA 413-320-5336 http://MontagueWebWorks.com Powered by ROCKETFUSION -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.gif Type: image/gif Size: 277497 bytes Desc: not available URL: From rob at 2disc.com Wed Mar 29 15:46:09 2023 From: rob at 2disc.com (Rob Laporte) Date: Wed, 29 Mar 2023 11:46:09 -0400 Subject: [Hidden-tech] ChatGPT prompt -- I'm impressed In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> <20230318143322.5274BB14004@webabinitio.net> <7c09cfe7-9e5f-2752-163e-17a905f8eeb5@montaguewebworks.com> Message-ID: <1872e0b5c4d.e626e01683271.1388031264933995277@2disc.com> I'm sure most will get today's news about Musk and Apple co-founder Steve Wozniak and many AI heavyweights calling for a 6-month moratorium on Chatgpt and other such AI:?https://futureoflife.org/open-letter/pause-giant-ai-experiments/. I'm working to get a vision of just how profoundly and quickly such AI will change our work and non-work lives. I have a lot more such work to go. My guess is this will prove more transformative than the PC and the internet, or at least on par. For example, in my mere several hours use of ChatGPT, it has already saved me, a friend, and my firm ~$2500 in professional fees; for one client likely $5000-$10,000 saved in graphic design (which my firm doesnt do); another client ~$10,000 by expediting writing of famous and semi-famous artist bios. Sure, the art and writing still takes time when using chatgpt and editing output, but tons of time is saved for quality that is plenty good enough and very cost-effective.?? ? One excerpt: " . . .?recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one ? not even their creators ? can understand, predict, or reliably control." Best Regards, Rob Laporte CEO? |? R&D Manager DISC - Making Web Sites?Make Money mailto:Rob at 2disc.com,?413-584-6500 https://www.2disc.com NOTE:?Emails can be blocked by spam filters throughout the web. If you don?t get a reply within an expected span of time, please call. ---- On Tue, 28 Mar 2023 15:28:15 -0400 Michael Muller via Hidden-discuss wrote --- Ok, I finally set up an account at chat.openai.com/chat and gave the following prompt: "Write a short story about a Zimbabwe boy child and a Chinese girl child who meet at a festival and become friends." The result was a well written, pretty heart warming story. As a programmer, I can sort of softly peel back a couple of the layers to explain how it came up with this, but not the whole thing. BTW, the Arts festival it lists is real and is happening in four weeks in Zimbabwe, and Tawanda is the 54th most popular name for boys there. Mik ----- Write a short story about a Zimbabwe boy child and a Chinese girl child who meet at a festival and become friends. Tawanda had always been fascinated by the colorful posters plastered all over town, announcing the upcoming Harare International Festival of the Arts. He had never been to a festival before, and the idea of seeing music, theater, and dance performances from all over the world made him tingle with excitement. On the first day of the festival, Tawanda found himself wandering through the bustling crowds, his eyes wide with wonder. Everywhere he looked, there were people from all walks of life, dressed in their most colorful and extravagant outfits. As he was making his way through a particularly busy section of the festival, he spotted a young girl standing by herself, gazing up at the towering buildings around her. She looked lost and a little scared, and Tawanda felt a sudden urge to help her. "Hello," he said, approaching her cautiously. "Are you okay?" The girl turned to him and smiled, and Tawanda was struck by her beauty. Her eyes were dark and almond-shaped, and her hair was black and shiny, pulled back into two long braids. She wore a traditional Chinese dress, with a bright red silk skirt and a white blouse adorned with delicate embroidery. "I'm fine," she said in perfect English. "I'm just a little lost. My parents went to see a show, and I got separated from them." Tawanda knew what it was like to feel lost in a crowd. He offered to help the girl look for her parents, and together they started weaving their way through the sea of people. As they walked, Tawanda learned that the girl's name was Mei, and that she was visiting Zimbabwe with her family. As they searched, Tawanda and Mei discovered that they had more in common than they thought. They both loved music, and Tawanda was thrilled to learn that Mei played the erhu, a traditional Chinese instrument. Mei was equally fascinated by Tawanda's stories of Zimbabwean mbira music and dance. Eventually, they found Mei's parents waiting anxiously by the entrance to the main theater. Mei's mother was overjoyed to see her daughter safe and sound, and she thanked Tawanda for his kindness. Tawanda and Mei exchanged phone numbers, promising to stay in touch after the festival was over. For the rest of the festival, Tawanda and Mei hung out together, exploring the different performances and getting to know each other better. They laughed and danced and sang, and Tawanda felt a deep sense of joy and connection that he had never experienced before. He had made a friend from a completely different culture, and it had opened up a whole new world for him. As the festival came to a close, Tawanda felt a twinge of sadness knowing that he and Mei would have to say goodbye. But he also felt a sense of gratitude, for the friendship that had bloomed between them in the most unexpected of places. He knew that he would always treasure the memory of that magical week, and the girl from China who had taught him that no matter where you come from, music and friendship can bring people together. ----- Mik Muller, president Montague WebWorks 20 River Street, Greenfield, MA 413-320-5336 http://MontagueWebWorks.com Powered by ROCKETFUSION _______________________________________________ Hidden-discuss mailing list - home page: http://www.hidden-tech.net Hidden-discuss at lists.hidden-tech.net You are receiving this because you are on the Hidden-Tech Discussion list. If you would like to change your list preferences, Go to the Members page on the Hidden Tech Web site. http://www.hidden-tech.net/members -------------- next part -------------- An HTML attachment was scrubbed... URL: From krystyobolyte at gmail.com Thu Mar 30 03:46:21 2023 From: krystyobolyte at gmail.com (Kristi) Date: Wed, 29 Mar 2023 23:46:21 -0400 Subject: [Hidden-tech] ChatGPT prompt -- I'm impressed In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> <20230318143322.5274BB14004@webabinitio.net> <7c09cfe7-9e5f-2752-163e-17a905f8eeb5@montaguewebworks.com> Message-ID: this is very sweet. I've experimented with it for legal blog post topics and have been quite impressed. Kristi A. Bodin Montague, MA c: 413-695-9848 krystyobolyte at gmail.com http://www.linkedin.com/in/kristibodin https://twitter.com/KABESQ http://KABESQ.com Legal Solutions in Plain English On Tue, Mar 28, 2023 at 11:41?PM Michael Muller via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > Ok, I finally set up an account at chat.openai.com/chat and gave the > following prompt: "Write a short story about a Zimbabwe boy child and a > Chinese girl child who meet at a festival and become friends." The result > was a well written, pretty heart warming story. > > As a programmer, I can sort of softly peel back a couple of the layers to > explain how it came up with this, but not the whole thing. > > BTW, the Arts festival it lists is real and is happening in four weeks in > Zimbabwe, and Tawanda is the 54th most popular name for boys there. > > Mik > > ----- > > *Write a short story about a Zimbabwe boy child and a Chinese girl child > who meet at a festival and become friends.* > > Tawanda had always been fascinated by the colorful posters plastered all > over town, announcing the upcoming Harare International Festival of the > Arts. He had never been to a festival before, and the idea of seeing music, > theater, and dance performances from all over the world made him tingle > with excitement. > > On the first day of the festival, Tawanda found himself wandering through > the bustling crowds, his eyes wide with wonder. Everywhere he looked, there > were people from all walks of life, dressed in their most colorful and > extravagant outfits. As he was making his way through a particularly busy > section of the festival, he spotted a young girl standing by herself, > gazing up at the towering buildings around her. She looked lost and a > little scared, and Tawanda felt a sudden urge to help her. > > "Hello," he said, approaching her cautiously. "Are you okay?" > > The girl turned to him and smiled, and Tawanda was struck by her beauty. > Her eyes were dark and almond-shaped, and her hair was black and shiny, > pulled back into two long braids. She wore a traditional Chinese dress, > with a bright red silk skirt and a white blouse adorned with delicate > embroidery. > > "I'm fine," she said in perfect English. "I'm just a little lost. My > parents went to see a show, and I got separated from them." > > Tawanda knew what it was like to feel lost in a crowd. He offered to help > the girl look for her parents, and together they started weaving their way > through the sea of people. As they walked, Tawanda learned that the girl's > name was Mei, and that she was visiting Zimbabwe with her family. > > As they searched, Tawanda and Mei discovered that they had more in common > than they thought. They both loved music, and Tawanda was thrilled to learn > that Mei played the erhu, a traditional Chinese instrument. Mei was equally > fascinated by Tawanda's stories of Zimbabwean mbira music and dance. > > Eventually, they found Mei's parents waiting anxiously by the entrance to > the main theater. Mei's mother was overjoyed to see her daughter safe and > sound, and she thanked Tawanda for his kindness. Tawanda and Mei exchanged > phone numbers, promising to stay in touch after the festival was over. > > For the rest of the festival, Tawanda and Mei hung out together, exploring > the different performances and getting to know each other better. They > laughed and danced and sang, and Tawanda felt a deep sense of joy and > connection that he had never experienced before. He had made a friend from > a completely different culture, and it had opened up a whole new world for > him. > > As the festival came to a close, Tawanda felt a twinge of sadness knowing > that he and Mei would have to say goodbye. But he also felt a sense of > gratitude, for the friendship that had bloomed between them in the most > unexpected of places. He knew that he would always treasure the memory of > that magical week, and the girl from China who had taught him that no > matter where you come from, music and friendship can bring people together. > > ----- > > > Mik Muller, president > Montague WebWorks > 20 River Street, Greenfield, MA > 413-320-5336http://MontagueWebWorks.com > Powered by ROCKETFUSION > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davidfo at gmail.com Thu Mar 30 13:07:42 2023 From: davidfo at gmail.com (David O) Date: Thu, 30 Mar 2023 09:07:42 -0400 Subject: [Hidden-tech] ChatGPT prompt -- I'm impressed In-Reply-To: References: <196290060.971490.1678393776863.ref@mail.yahoo.com> <196290060.971490.1678393776863@mail.yahoo.com> <0e1aca9836279aa0bea11a30627ea97c@8wheels.org> <62601017.1050433.1678418061515@mail.yahoo.com> <20230310150355.CAE43B1401C@webabinitio.net> <1714938371.1314807.1678498586187@mail.yahoo.com> <186d2527896.cb18e4c83946497.8031613139932413279@2disc.com> <697951220.1424386.1678568135796@mail.yahoo.com> <1077385726.1475766.1678589145074@mail.yahoo.com> <186e08e3703.c09f7c874583173.1890424310496816823@2disc.com> <1459188877.732266.1679013670900@mail.yahoo.com> <20230318143322.5274BB14004@webabinitio.net> <7c09cfe7-9e5f-2752-163e-17a905f8eeb5@montaguewebworks.com> Message-ID: So I went to ChatGPT (PLUS) and said: Take this story and rewrite as an illustrated children's book in the style of Dr. Seuss. *ChatGPT:* Title: Tawanda and Mei's Zibber-Zabber Adventure Page 1: (Illustration of Tawanda looking at a poster of the Harare International Festival of the Arts) In the town of Harare, both day and night, Young Tawanda dreamed of colorful sights. The posters displayed all over the town, Made his heart flutter, with joy unbound. Page 2: (Illustration of Tawanda excitedly walking through the bustling festival) The festival came and this little boy, Wandered the streets filled with lively joy. He saw people in colors, dancing and bright, His eyes opened wide to this wondrous sight. Page 3-4: (Illustration of Tawanda spotting Mei in the crowd and approaching her) He wandered and gazed, and what did he see? A girl looking lost, as lost as can be. Her eyes like the night, her hair shining black, Tawanda stepped forth, there was no turning back. "Hello there, dear friend," he kindly inquired, "Are you alright, or help is required?" She turned with a smile, just as pretty as day, In a dress of deep red and a blouse on display. Page 5-7: (Illustration of Mei explaining her situation to Tawanda and them walking together, talking about music) "My name is Mei, and I'm lost, it is true, My parents are gone, and I don't know what to do." So Tawanda took her hand, and they searched high and low, As they bonded through music, their friendship did grow. With an erhu from China and an mbira too, The music they shared, it was something so new. They laughed and they danced, and their spirits took flight, In this Zibber-Zabber land, where music's alight. Page 8-10: (Illustration of Tawanda and Mei finding Mei's parents, exchanging phone numbers and exploring the festival together) Soon enough, they found Mei's parents so dear, Tears of joy glistened, and the end seemed so near. With a promise to stay in touch, numbers exchanged, Tawanda and Mei explored, friendships rearranged. In the world of the Harare, so colorful and grand, They discovered more music - oh, it was so grand! This little boy and girl, from worlds far apart, United through music, a friendship to start. Page 11-12: (Illustration of Tawanda and Mei hugging goodbye and waving farewell) But as all good things must come to an end, So too did the festival, for Tawanda, his friend. With a hug and a wave, they said their goodbyes, Their hearts filled with memories, under African skies. So remember young Tawanda, and Mei of the East, In a world filled with music, they shared quite a feast. A lesson they learned, as the world seemed to dance, Nothing unites quite like music and second chances. The End. *ME:* Not bad, but let's try it in blank verse using Shakespearean English. *ChatGPT: * Title: The Harmonious Tale of Tawanda and Mei Page 1: (Illustration of Tawanda looking at a poster of the Harare International Festival of the Arts) In fair Harare, where our scene unveils, Lies Tawanda, dreaming in his humble bed, Of music, dance, and theatre so grand, Whilst posters brightly painted fill his head. Page 2: (Illustration of Tawanda excitedly walking through the bustling festival) Anon the festival begins with mirth, Young Tawanda doth set foot and seek, To witness festivities from diverse lands, In fervent hope, performances unique. Page 3-4: (Illustration of Tawanda spotting Mei in the crowd and approaching her) In midst of merriment and joyous dance, He spies a damsel lost and forlorn, Her eyes as dark and deep as silent night, Within her braids black strands of silk are worn. "Good morrow, lady fair, canst thou be well?" Young Tawanda softly inquires, Her gaze now met, she graciously smiles, In dress of crimson red and frock attires. Page 5-7: (Illustration of Mei explaining her situation to Tawanda and them walking together, talking about music) "Alas, I'm lost," the gentle maid doth speak, "My name is Mei, and family I seek." Quoth Tawanda, "Fear not, I shall assist," And hand in hand, they searched as the music persist. In their quest for kinsmen, they did converse, Of melodious tales and love for art, >From China's erhu to Zimbabwe's mbira, Their shared passions ignited a friendship to start. Page 8-10: (Illustration of Tawanda and Mei finding Mei's parents, exchanging information, and exploring the festival together) At length, they found the guardians of sweet Mei, United in relief and boundless cheer, With promises of future contact made, Our friends emerged to celebrate good cheer. Varied arts of wondrous skills they saw, >From farthest corners of the Earth displayed, In Harare's festival they danced and sang, As Tawanda and Mei's friendship further weighed. Page 11-12: (Illustration of Tawanda and Mei hugging goodbye and waving farewell) Alas, the revelry would soon be o'er, Their parting filled with tender, bittersweet, The newfound comrades hugged and waved adieu, Their story etched in memory, complete. For in this tale of Tawanda and Mei, A truth is found, a lesson to convey, In friendship forged through rhythm and sweet notes, We find our bond, though from the farthest bay. The End. On Tue, Mar 28, 2023 at 11:40?PM Michael Muller via Hidden-discuss < hidden-discuss at lists.hidden-tech.net> wrote: > Ok, I finally set up an account at chat.openai.com/chat and gave the > following prompt: "Write a short story about a Zimbabwe boy child and a > Chinese girl child who meet at a festival and become friends." The result > was a well written, pretty heart warming story. > > As a programmer, I can sort of softly peel back a couple of the layers to > explain how it came up with this, but not the whole thing. > > BTW, the Arts festival it lists is real and is happening in four weeks in > Zimbabwe, and Tawanda is the 54th most popular name for boys there. > > Mik > > ----- > > *Write a short story about a Zimbabwe boy child and a Chinese girl child > who meet at a festival and become friends.* > > Tawanda had always been fascinated by the colorful posters plastered all > over town, announcing the upcoming Harare International Festival of the > Arts. He had never been to a festival before, and the idea of seeing music, > theater, and dance performances from all over the world made him tingle > with excitement. > > On the first day of the festival, Tawanda found himself wandering through > the bustling crowds, his eyes wide with wonder. Everywhere he looked, there > were people from all walks of life, dressed in their most colorful and > extravagant outfits. As he was making his way through a particularly busy > section of the festival, he spotted a young girl standing by herself, > gazing up at the towering buildings around her. She looked lost and a > little scared, and Tawanda felt a sudden urge to help her. > > "Hello," he said, approaching her cautiously. "Are you okay?" > > The girl turned to him and smiled, and Tawanda was struck by her beauty. > Her eyes were dark and almond-shaped, and her hair was black and shiny, > pulled back into two long braids. She wore a traditional Chinese dress, > with a bright red silk skirt and a white blouse adorned with delicate > embroidery. > > "I'm fine," she said in perfect English. "I'm just a little lost. My > parents went to see a show, and I got separated from them." > > Tawanda knew what it was like to feel lost in a crowd. He offered to help > the girl look for her parents, and together they started weaving their way > through the sea of people. As they walked, Tawanda learned that the girl's > name was Mei, and that she was visiting Zimbabwe with her family. > > As they searched, Tawanda and Mei discovered that they had more in common > than they thought. They both loved music, and Tawanda was thrilled to learn > that Mei played the erhu, a traditional Chinese instrument. Mei was equally > fascinated by Tawanda's stories of Zimbabwean mbira music and dance. > > Eventually, they found Mei's parents waiting anxiously by the entrance to > the main theater. Mei's mother was overjoyed to see her daughter safe and > sound, and she thanked Tawanda for his kindness. Tawanda and Mei exchanged > phone numbers, promising to stay in touch after the festival was over. > > For the rest of the festival, Tawanda and Mei hung out together, exploring > the different performances and getting to know each other better. They > laughed and danced and sang, and Tawanda felt a deep sense of joy and > connection that he had never experienced before. He had made a friend from > a completely different culture, and it had opened up a whole new world for > him. > > As the festival came to a close, Tawanda felt a twinge of sadness knowing > that he and Mei would have to say goodbye. But he also felt a sense of > gratitude, for the friendship that had bloomed between them in the most > unexpected of places. He knew that he would always treasure the memory of > that magical week, and the girl from China who had taught him that no > matter where you come from, music and friendship can bring people together. > > ----- > > > Mik Muller, president > Montague WebWorks > 20 River Street, Greenfield, MA > 413-320-5336http://MontagueWebWorks.com > Powered by ROCKETFUSION > > > _______________________________________________ > Hidden-discuss mailing list - home page: http://www.hidden-tech.net > Hidden-discuss at lists.hidden-tech.net > > You are receiving this because you are on the Hidden-Tech Discussion list. > If you would like to change your list preferences, Go to the Members > page on the Hidden Tech Web site. > http://www.hidden-tech.net/members > -------------- next part -------------- An HTML attachment was scrubbed... URL: