top of page
Search

Not another boring conversation about the future of email marketing and the rise of ChatGPT and AI



Listen to the episode above, or read the transcript below (as provided by Otter.ai - affiliated).

For fun and giggles, and despite my own strive for accuracy, I didn't correct the transcript - so here it is with errors and all. How's that for AI taking over, eh?


Here are the highlights of the conversation, as provided by Otter:

  • Introduction to today’s guest.

  • What is an ethical email strategist and copywriter?

  • How would you describe a sleazy email ad vs one that has been thought out in the manner that you are working with your clients?

  • Is it unethical to use AI to write about things you know nothing about?

  • Using AI for different things –.

  • Who are we trusting? Who are we consuming content through? Who are we consuming content through?

  • Why it’s important to close the curiosity gap.

  • How does AI have the potential to erode trust in science?

  • The breach of trust that comes with using AI.

  • The speed of AI is overwhelming, but people like doing business with other people.

  • There’s no replacement for understanding human psychology.


Personally, I'd provide more juicy highlights. But let's stick to our theme here.


Let me know in the comments below - what were your personal highlights from this episode?




Read the full transcript:


Scott Sereboff

Hey everybody, welcome to privacy of me. I am again very excited as today we're going to be having a discussion with Yuval Ackerman. Yuval is in noted ethical email strategist and copywriter who does a lot of work in the E commerce field with personal brands, creating human first experiences that stand out in our oversaturated inboxes. She's also a songwriter. She has her own podcast called Loving against my instincts. And as a famous lover of all things pasta oriented. I'm very excited to have her on privacy of me, where we're going to spend some time today talking about ethics, AI, and writing. So without any further ado, let's get to it.


So, everybody, I'd like to introduce Yuval Ackerman, who is our guest today on privacy of me. I'm going to immediately ask you Well, you've all Why don't you tell us a little about yourself and your sort of generalized opinion about this, this whole artificial intelligence thing?


Yuval Ackerman

Sure, Scott. First of all, thank you so much for having me. I will quickly introduce myself, I'm an ethical email strategist and copywriter. I'm a digital nomad as of lately. And I'm working both with E COMM And DTC brands, as well as personal brands to basically do email marketing, without the sleaze with more fun, more connection. And everything in between. My stance on AI, you know, a lot of my friends and colleagues in the copywriting and strategy room have been talking about AI in so many different ways, especially the old debate of will it replace us or would it take our jobs? And honestly, I think AI when done properly, like pretty much everything in life is a great tool to help us rather than replace us. So I am not against AI in any in any way. But then again, as always, the question is, how do we use it? How do we share the fact that we use AI? Yeah, just the whole how and and when not necessarily the what?


Scott Sereboff

What is an ethical email strategist,


Yuval Ackerman

basically, and that's something that I'm seeing quite a lot recently is that I'm working with brands who don't see their subscribers as walking wallets first, but as humans first. So that means that we are working on building and nurturing relationships and connection rather than trying to sell immediately. I mean, don't get me wrong, all brands of all kinds want to sell their products, their services, their solutions, but we have to remember that there are people on the other end on the receiving end. And those people more than ever want connection. They want to have a conversation about the things that bother them, and the things that they need help with. And we are here as brand owners or service providers to supply them with the solution. We are not gods we're not above anyone else, we just thought of a solution, or tweaked a solution better or differently, or came up with something completely new.

Scott Sereboff

You said something that made me laugh. sleazy was the word use? What would you how would you describe a sleazy email ad versus one that has been thought out in the manner that you're working with your clients on and this sort of ethical approach?


Yuval Ackerman

I just wrote about it. So my standards are different than others. And that's fine. I think as long as we have a discussion about what what is ethical and what is not, then we're already in the green zone. But for example, according to my values and my standards, bombarding your subscribers and telling them that your cell is about to end and then a day later surprise, we extended the sale for you, which is a huge lie. That to me is unethical and sleazy because if you're bombarding your subscribers and telling them you have to buy now before the coupon code or whatever it is, is expiring and then you're saying And then again, we want more sales because let's let's face it, that is the messaging. Yeah, it is the the end game after all, that you're not being transparent and you're not being ethical whatsoever. That's That's my opinion. But hey, it's best practice. So it must work for everyone. Right?


Scott Sereboff

Well, all but you said and I think correctly, so not to drift too far off topic, but all businesses at the end, want to sell whatever their product or products are. And you couldn't make a living any more than I could if people weren't buying products. But there's a difference between the the bombardment by now by now by now, and the consultative approach of, well, maybe now's not the right time for you to buy, let's look at the options and create a relationship that leads to a purchase, rather than just trying to get the purchase. One is much more of a scorched earth approach than the other. But we digress. How do I tell it's you that wrote the email? In a world where AI is becoming ever more prevalent. And you're seeing you know, we you and I talked offline about chat, GPT, and three, and fours coming out. And one, one of the things that we talked about was the identification of authorship. Right? Now, privacy of me is fundamentally a video and biometrics based podcast, or at least was supposed to be, but as I have come to realize, all facets of AI, count, audio, video, etc. So how do I know it's you? How do I know today? That you've all Endor? You've always customer wrote that email versus an AI?


Yuval Ackerman

That's a very good question. And honestly, I'm not sure if you can distinguish between an AI version or my version, if we're looking at a standalone email, right? Because I think a very generic sales email with the multitude of emails out there, if you feed an AI the algorithm properly. I think that if you're talking about very straightforward sales email, you wouldn't be able to tell, however, is it?

Scott Sereboff

Is it important to be able to tell?


Yuval Ackerman

Is it I'm not entirely sure anymore, because at least from what I'm seeing in the wild with my the competitors of the brands that I'm working with, they are doing all of quite the same thing. So basically, there isn't much innovation, and there isn't much intrigue or interest. I would go as far as to say, and this is me being my radical self, that most of email marketing is done badly today. And it's very, very boring. And this is a massive overgeneralization, right. But I think that brands that work with me see the value in changing that. And so their sales emails are not something or or, you know, the good kind of email marketing, their sales emails, you wouldn't even they wouldn't even come across the sales emails. Right? Even though every email is a sales email. But I think this question of yours, Scott, could be dealt with much more sophisticatedly if we talked about a series of emails and not a standalone email. Because if you are launching something, there is a way to strategize that launch in a humane way that I think AI is still not there.


Scott Sereboff

But it almost sounds like what you're you're saying, from your standpoint, if the email is written in the ethical format you spoke about earlier, then you don't really care who wrote it. The The end result is where you're focusing your attention. Would you say that is correct?


Yuval Ackerman

Partially, because there's also a way of doing things or communicating things. That I think both AI and human beings sometimes get completely wrong. And that's the whole aspect of the human connection and human psychology and how do we use those things to our benefit, but also to our readers or subscribers benefit?


Scott Sereboff

So I've heard people say in writing, that one of the things they're afraid of is for example, I can't cook. I can make eggs. I think everyone can make it But I can't cook. But it doesn't stop me from writing a cookbook. Because I can use AI to help me author a book about a subject I know nothing about personally, it seems immoral to me to try to make money off of something like that, that you yourself know nothing about. Right. So where do you where do you stand? And where do you think we should be? In regards that sort of thing? Do we? If we consider ourselves writers? Do we have an obligation to tell the public? Okay, this is an AI generated novel cookbook, whatever or not?

Yuval Ackerman

I think so. Absolutely. Just, I think that AI or not, there are people out there who are writing books about things they have nothing about, have no knowledge about, regardless of AI. I think that's unethical, too. But I think, especially with AI and more and more brands using AI, I think, to me, it makes a whole lot of sense for them to say, Hey, listen, we have used AI to create whichever piece of promotional assets that you're seeing


Scott Sereboff

is, is is AI, a crutch? Is AI something that can be hidden behind? In terms of this sort of subject? You know, if you're not a great writer, can it help you if you don't have expertise that can help you? Is that where there's the morality and potential downfall issue? That it just you can hide behind it?


Yuval Ackerman

It could be. I do know a lot of writers myself that are already using AI as a way as a crutch, but not in the same way. But they're using AI to either generate ideas, or parts of the content or the copy that they're writing. But then they tweak it to their needs. But those writers and I'm again, generalizing here, those writers actually say, Yes, I use AI for ideas, I use it for, you know, writing, you know, a long piece of content that maybe I don't have the energy to write, but they see it. And they they're not ashamed of mentioning that. And I think that if we're talking strictly ethically, that's what every brands out there who is even considering using AI for such things need to consider.

Scott Sereboff

When when I rebuilt the corporate website for our company, I used writer access, which is a human service you, you hire freelance writers. And when we launched the website, there were 15 or 16 articles. So never once did it occur to me to annotate. Okay, I only wrote one of these, I only wrote two of these. I wonder if I would have handled that differently. If I had used AI to write all of those articles. It almost the point that I'm driving as it seems like the AI part of it is what creates the difference? In other words, oh, you should tell people that it's AI. But if it's fellow humans, and hired a bunch of freelancers to do, it doesn't seem to carry the same moral weight, if you will, that AI does.


Yuval Ackerman

Yes, and no, because I think you know, as a business owner wants us to call herself a freelance, or a freelancer. Having not having my name on something that I worked on is is a punch in the gut. I think that if we're talking about credits, giving credits to either a human being who works on a piece, or AI, that to me is kind of the same.


Scott Sereboff

Interesting, you said something a few seconds ago where you've used AI. So tell me, how the writer how does the writer of today take advantage of AI?

Yuval Ackerman

So for full disclosure, I haven't used chat GPT yet, it's on my to do list. But using AI, I am using AI for different things. You know, AI is a whole you know, range of services that I'm using from a transcription service that I'm using almost on a daily basis and shorten my you know, transcription time and writing time by a lot. It could be from generating ideas. I'm specifically I don't use the service, but there's a service that is being celebrated out there called Taplio, for LinkedIn posts, and that one is very, very successful and really, really, yeah, well thought of and really helps a lot of writers out there err, to generate ideas, analyze ideas, analyze posts and their performance. But yeah, there are so many ways for writers today to, to use AI. It's just a matter of what do you need it to do for you?


Scott Sereboff

Taplio, is it generating, literally generating LinkedIn posts?


Yuval Ackerman

I'm not sure I do know that it generate, it generates ideas for posts, that's I'm pretty sure that I can see that.


Scott Sereboff

We tend to look at AI as, and this is on the positive side. It is a an accelerant. It is a time accelerate, we have 24 hours in a day. And no matter how much you sleep, some of that time is going to be spent sleeping. Some of that time will be spent in whatever your personal day to day things are. What's left that we dedicate whether it's to something we earn money for whatever doesn't matter. AI can help us by adding time back into our day. And a writer can only write, I would assume anyway, one thing at a time, maybe there's some special ones among us who can have three things going simultaneously. But for most of us, it's a one thing at a time kind of approach, research, idea generation, all of these things that you could perhaps suggest are sort of the grunt work. By giving that to an AI, it adds time to your day to do the thing that makes you special for your writer, it's the writing, if you're an artist, it's the painting or the music or whatnot, right? Do you see any part of that as negative?

Yuval Ackerman

If we're talking strictly about the grunt work, I would say that most of it is positive. The part that is negative is the part where I'm going in my mind, oh, wait, this is actually not the kind of information that I needed to research. Or this is not the kind of information that I trust is factual, or unbiased enough. Or there are 1000 different ways to look at it, right. I think what AI presents today is an opportunity for us to get rid of a lot of those daunting tasks, obviously, like research like transcribing, but still, and this is something that I'm noticing myself with the thing with how I'm using AI, I still have to go back and recheck if everything was done properly, because mostly, it's still not 100%. That's why I haven't done research through AI completely yet, because I don't trust is that it will fetch the the sources of information that I need that I'm looking for, and then being you know, the right sources of information. That is why I won't still prefer just what's the word that we're looking for? Gosh, I wouldn't just publish things that has been transcribed with without going over what has been said beforehand, because I know for a fact that it, you know, my transcriber, as good as it is it is doing maybe 90% of the work properly.


Scott Sereboff

But although doing that 90% The work saves you much more time than if you had to do 100% of the work.


Yuval Ackerman

Not quite because if I need to publish this entire transcription, I still have to listen to the whole conversation, maybe in 1.5 or 1.7, and still fix sentences or terms or words.


Scott Sereboff

So are you suggesting I don't think you are that it would be just as easy for you to do it yourself. Maybe interesting. So I tend to agree up to a point with when you look at something like Jasper, where Jasper allows you, you know, you put your little cursor in a box and you click and it will write something based on what you want it to say. And it's good. But it's interesting to watch how good starts to slide down, the longer your pieces, right. The verbal the written AI doesn't really work well. When you get long form writing it starts to get repetitive or goes off on weird tangents. art, music writing, they're all kind of in the same basket and there's There's an AI music site called Sound raw. And it purports to generate music 100% AI, and in the podcast of last week with William McKnight, there's a point at which we actually stop. And I put five pieces of music up some AI, some human, okay, who can tell the difference. But you know, it only works well for 15 to 20 seconds, because it's repetitive. All of the AI music tends to repeat the same theme, as it goes on, becomes easy to tell, writing, I think, today, we sort of see the same thing I can, I could see how it would be super helpful for someone like yourself, or someone like me, for example, doing a fact checking. You tell them, you know, an AI program that you somebody has built to help you go out and make sure things are correct, go find me, you know, three supporting sources. But as a writer, especially with your experience, and your background, and what you do today, what are the things that scare you about it or keep you up at night about it. And I mean, AI, specifically, as a writer.


Yuval Ackerman

Let's start by by mentioning the fact that nothing keeps me up at night. Let's just put that aside for a second. I'm a very good sleeper, thankfully. But as someone with a journalistic background, and you mentioned there something about facts and supporting sources, what I am worried about now that it keeps me up at night, but what I am worried about is that, first of all, what is the fact there is no one universal truth ever? And second of all, the supporting sources? Who Who are they coming from? Where are they coming from? How do they support whatever it is that I need to fact check, you know? So then again, we're going back to this whole concept of, oh, I need to double check what I just got. So I don't know. And this is me being very, very detail oriented. And this is me being also a control freak. But I would go and double check everything either way. So I don't know how much time that saves me. If I'm using AI and need to double check everything and cross check everything?

Scott Sereboff

Well, it certainly shows a different level of moral approach. Some facts are facts was okay, it was Rudy Giuliani standing in front of the Four Seasons hotel, or a landscaping company. The fact is, he was in front of a landscaping company, right there, there are physical, evidential facts, I was driving the car, there's a picture of me driving it, whatever, things like that. Yes, you can, you can check those facts. What I'm more referring to, especially in the writing realm, AI could generate, I shouldn't even say can generate content at a much greater rate than weakened. Which means that the use of AI could allow for flooding the airwaves, if you will, with whatever the hell you wanted to say, in multiple spots about any subject you wanted to. It is in that area that I would go back to kind of how we started in a way, which is, how do you tell the difference? And how important is it for us to know, this is sort of rhetorical, but not we have an upcoming presidential election in the good old USA next next November. When we're on Facebook, and LinkedIn, or whatever, and we're seeing all this stuff about Candidate A or Candidate B. And how the hell did we tell who wrote it? And if it's true, we're just seeing content? What do we do?


Yuval Ackerman

This question is much deeper than what you just presented. Because we don't we have no way of knowing. As of today. We don't even know who who is the person who wrote a piece of content and is this person credible? So this question is much deeper and much more philosophical than just AI. Who are we trusting? And who are we consuming? content through? Is this person reliable? Do they know what the hell they're talking about? Sometimes, yes, but sometimes no. And those are the same people are not those. But basically depending on the Person feeds the AI algorithm with their morals or lack of, of morals, you'll, you'll get similar results, right? A person with a high work ethic and high morals, they would have a whole different kind of content generated with or through AI than people who just want to get the job done. And the hell with the facts. So I think it starts and ends with with the person behind everything. And I think this is what we need to talk about, rather than will the AI generate something that we could trust? I think we need to ask ourselves, who are the people that we trust?

Scott Sereboff

Well, and as we have also spoken about offline, part of the public sense of AI, this this magical, you know, which like, you know, the Good Witch Glinda, the Good Witch, that's what I was looking for, with the magic wand and the beautiful face, and the blond hair and the little cute wings is not correct. Ai, which at its most basic is a bunch of math, a bunch of algorithms right mathematical formula has been set up programming has to be taught what to do. It is, in many ways, like an infant, except not as smart. There's no instinct to it. Chat GPT was trained using hundreds of 1000s of actual words, and sentences and phrases and idioms and slang and every other thing you want to put at any one of us could affect how chat GPT responds with how we trained it. So part of part of the fundamental issue with AI and you're 100% Correct. Do we understand it? Does the average person now where are you sitting right now? What country are you in right now?


Yuval Ackerman

I'm in Spain,


Scott Sereboff

you're in Spain? You're Israeli by birth, in your in Spain, you've lived in Germany. Does the average Spaniard or average German Ravagers really know? What goes behind the curtain? In AI? No. And if the total population of Spain equals 100% 1%, maybe understands it half a percent?


Yuval Ackerman

I don't have the stats about this. But but let's go with this theory of yours.


Scott Sereboff

Well, the theory being 99% of the people out there don't understand how it works. They see it as magic. Don't we have to educate these people on what you just said, don't worry about how the AI created the article. Tell me who was behind the creation of the AI that created the article. And that will help inform me on what their bias was, if any, in the creation of the AI.


Yuval Ackerman

As an optimist, I'm going to say something very pessimist. I don't think it matters. Now, because I don't think it's important. But I think most people because this is how we are wired as humans, especially in 2023. And going onwards, we are incredibly lazy. We want the easy way out. We don't really care how things work. We just want them to work for us. If it works for me, if it serves my I don't know, purpose, or whatever it is that I need it to do for me. It doesn't matter. I'm not saying it's good. No, it's just the way it is.


Scott Sereboff

And I agree with you. But I'm not sure if it's a but I agree with you. But I would hope that if I told someone the articles you're reading were written by an AI that was trained by the American Nazi party. I would hope that those people would then take those articles and set them aside as unreliable and biased horribly towards, you know, anti, whether it was anti Jewish or anti African American, whatever, it doesn't matter, you get the point. Right. If if I told you it was it was done by a fundamentalist Christian organization that thought women should be wearing, you know, red scarves over their heads and having babies. Hopefully everyone would set that aside. The issue isn't that I think it for most people, I think if you gave them the bald faced proof before they got too far down the rabbit hole, which is kind of the key here. They would say, Oh yeah, I don't want to I don't want to listen to that. But if they've already gone down the rabbit hole, you're probably right. You probably can't pull them back. Even when you show them that information. The other the other part of it is? No. I think I understand what you said, no one's gonna look, will anyone take the time to understand where the information came from? And you're saying no.


Yuval Ackerman

And you mentioned there something about the before and after. And this would be a really interesting study, if anyone would actually try it, try to do something with it. But I'm also thinking, and this is just me, hypothesizing out loud, not based on anything, obviously. I think that even if we tell people, before they read something where the source came from, there will be a curiosity gap opened or an open loop right there that we will have to kind of close. So I don't think that as


Scott Sereboff

explain, explain what you mean by that before you go on. Explain.


Yuval Ackerman

Okay. In copywriting and human psychology, there's a whole concept of opening loops, and creating curiosity gaps, meaning giving just bits and pieces of information, just to pique your curiosity enough for you to continue reading or to click on something to see more of. And that's why you see all kinds of in my field, for example, this is how this person achieved that. And then you're thinking, Okay, I must click on this on this link to understand how this person who is, who is someone like me achieved that amazing result? That's what I mean by opening a loop or creating a curiosity gap. And I think we have to kind of close it for ourselves, because otherwise it would, you know, make us go crazy. And I think that, once you tell someone, okay, this piece of information, or that whatever it is, was created by whichever party or whichever stream or whichever even called I don't know, some people would still be intrigued and curious to see what that source of information is. So I'm not sure that all of them, if not most of them, would put that source of information aside, if anything, I'm pretty sure that they will actually go ahead and read it.

Scott Sereboff

interest. So it is not enough to close the curiosity gap. To say to the person, well, this came from the American Nazi Party, that can't close the curiosity gap. They might say, if I understand you correctly, they might say, well, you know, it's the Nazis. They're terrible, but I'm going to, I want to see what this is anyway.

Yuval Ackerman

Exactly. I think if anything, it opens a bigger curiosity gap by you know, you saying, Oh, this is this is nonsense, or, you know, basically suggesting that this is nonsense. Or this is not something that you would resonate with. But still, this was created by someone who's this radical. And today, we are well aware of, you know, radical opinions and how catchy those things are. If I say, I think if you're presenting someone with the notion of this, this was created by someone radical, people would be like, Oh, actually, yeah, we know it's radical. But this is interesting. So we might as well just go ahead and read it.


Scott Sereboff

We also we know from the last 15 are really tend to be more fair ears, that much of what we take in and social media feeds into our own evidenced confirmation bias. So I'm in my head, I'm listening to what you're saying about this curiosity gap. And I'm wondering, let's use an example. That's fairly recent, when when President Obama was running and and people were suggesting he wasn't an American citizen, right. If you have the curiosity gap created by the statement, click here and discover why, you know, Obama is not allowed to be the president. And you clicked on it, and said, I knew he wasn't from America. It's that confirmation bias. Right. Then it renders where the information came from. irrelevant, does it not? For that person for that for that if you've achieved confirmation bias than the fact that it came from the American Nazi Party is irrelevant to you. Because it's it is given you your own proof. I knew they were right. Doesn't matter who told me it's still a fact. In your head.


Yuval Ackerman

In a sense, what you're saying is, yeah, unfortunately, right.


Scott Sereboff

Wow, that's a depressing thought. Largely, largely Well, largely because, you know, when you flood I'd said earlier, you flood the airwaves with content. And, you know, part of the goal of what we're doing with the privacy of me podcasts and video simulcast is staying away from, you know, acknowledged experts, if you will, and trying to have conversations with, you know, quote, real people in quote, who have to deal with this stuff at the level that you and I do. It's depressing and scary because of people's propensity to believe what they're told. Without question. Now, I'm not talking about people that are heavily curious or possessed of the research streak that I really want to know the truth here. But those people are few and far between. And the future you're sort of describing is one of, well, if I can get them curious. And I'm saying if I can confirm their own bias, where they got it doesn't matter, it can be made up out of whole cloth by an AI, in the Skynet headquarters somewhere, and people won't care.


Yuval Ackerman

Yet again, I think we're diving into a much deeper philosophical question of how do we make people care?

Scott Sereboff

The reason why it's an important philosophical question, and you're right, it this is best left for the you know, the Immanuel Kant's of the modern world to kind of suss their way through is an understanding of potential futures as relates to AI. Exactly. Does aI have the potential to continue to erode trust? In fact? Does it have the potential to erode trust in science? Is the creation of content the weapon? Not how it's created? But how much? And if the answer is yes, then AI can certainly play a role in a negative way, because it can just do more than the rest of us.


Yuval Ackerman

I would say it's a yes, but answer. Because I think that big companies like Facebook, like Google, like LinkedIn, need to be aware and need to already start working. If they haven't, they need to start working on a way of optimizing user experience in a way that is in the user's best best interest, which is not how many pieces of content are being produced. And that would be the only metrics metric on which thing something things would be pushed more, or you would be exposed more to these kinds of content, I suppose. But what is the quality of the content that it's is being pushed or posted or shared? And I think, you know, it's a much broader conversation of what brands with responsibility facing the whole AI revolution, as we know is today. But it's a much broader conversation. I think we are in a very interesting point in time, where we need to see how we can actually use AI to our advantage, rather than to our disadvantage. And there are ways to do so obviously, but it would require some sacrifices in order to either obtain or have users or subscribers or or customers trust remaining within our or to have our customers trust


Scott Sereboff

in sacrifice is like what? It's a great word. So what kind of sacrifices


Yuval Ackerman

so we all know how much Facebook for example, is benefiting from from ads. Obviously, they have very strict guidelines when it comes to posting ads and what kinds of ads and what can you use or not use in terms of language or even photos? And I think that if they are not limiting contents created by AI to some extent, then we might have a problem. Because for them, it would mean less money. So basically the sacrifice that I'm talking about is money, right? That's the the currency that we're all talking about.

Scott Sereboff

Yes, of course.

Yuval Ackerman

If basically it would mean that they will have to spend more money and maybe have less money coming in, if they have more stricter regulation on as created with AI,


Scott Sereboff

another deep, deep philosophical well, because Facebook at all, all of them started with the best of intentions. But now the money is so great that their own moral gray areas have widened in scope to encompass a whole lot of things that I'm quite, I believe they never would have agreed to in the beginning, when there was no money, right? Money has a really strange way of creating moral gray area. And it's probably folly for anyone to assume that any of these companies would willingly give up profit in exchange for moderation. Right? They're all really good at saying it's, Hey, we're not responsible for the content. It's somebody else's problem. We're not responsible for people's propensity to believe everything we put on Facebook or LinkedIn, that's someone else's problem. Why are you sure that your livelihood, your job is not at risk? From AI over the incoming decade, people that pay you today for content, or writing or whatever? Just simply using AI in the future? Why are you not afraid of that? Or are you and you're just not admitting it?


Yuval Ackerman

No, I'm actually not afraid of it whatsoever. Because there's two main reasons. The first one is human psychology, which cannot be replicated by AI, as well as humans can. Yeah. And the second thing is a point that we started touching on a few minutes ago, which is the currency of trust. Matter of fact is, money is a money incoming to any kind of brand is a result of this brands, basically creating and nurturing and know like and trust process, as we call it in the copywriting world, in order for a customer to actually invest more than their time and energy, and what else I think there was another one. But let's go with time and energy, and Attention, attention. That was the third one, there we go. So if we're thinking of the fact that people invest time, energy, and attention, in our, in the brands, before they even invest any kind of financial, you know, aspect of things, then you have to nurture some kind of a process of know, like, and trust. And if you are getting to the trust point where people are actually investing money in your brand, and then you're breaking the trust, I would argue and say that, once you've broken that trust, you can very rarely get people back on your

Scott Sereboff

train would break, evolve, what would break the trust, what part of this would be a broken confidence?


Yuval Ackerman

That really depends. But when it comes to AI, I think if a brand, for example, uses AI to generate blog posts, and several of them per month, or per week or per day, and that reader of this brand blog is reading all those blogs, or most of those blogs, and those blogs, promise something and then the reader is going to buy the solution product service, you name it, and left highly disappointed because the blog posts that that reader read and what he got eventually have nothing to do with one another. That's a very clear breach of trust.

Scott Sereboff

I find there's a part of it I agree with and it's a step to the left. When I go on to a website that has a customer service representative that is a chatbot. If I don't have a quick way to get to a human, I will go somewhere else with my business. Because a chatbot can't possibly get the nuance of whether or not I'm I'm angry, or I'm upset or whatever. I mean, I could throw some four letter words at it, but it'll you know, might that will help it but we have nuance that AI is not yet capable of grasping and replicating. Right. But I'm going to challenge you on one part of what You said, which is, if we're talking about short, and blog posts, or something that is short form a tweet, assuming Twitter's around a tweet of some sort, those kinds of short form things, I really wonder, do people that make a living doing that today have a job five years from now, because of the continuing improvement in the ability of AI to write those short form passages. And I think they, I think they will get to that point. And I worry about those types of craters, just like I would, about musicians that write instrumental own, like music, elevator music being replaced by AI. So I think you're wrong.

Yuval Ackerman

But then we have to take into consideration two things, in my opinion. First of and foremost is yet again, the human psychology aspect of things, and how do we connect to Pete to other people through a digital medium, which I think AI, maybe down the line will be able to catch up on, but I highly doubt it. And the second thing that I wanted to mention is that AI will just like any, any other tool out there will differentiate between people who are really great at what they're doing. And professionals that might need to look for another job, you know, because they're not as great as the other people. The other


Scott Sereboff

group, that statement weeds out a certain percentage of people who are making a living doing that thing today.


Yuval Ackerman

Right? Exactly.


Scott Sereboff

A good thing in terms of pushing the very creme de la creme to the to the top, but what do we do with all the people who were average, who no longer can earn a living being average.


Yuval Ackerman

And they will need to, you know, be better, get better, or, at that reduces out of the field.

Scott Sereboff

And, okay, and I understand this is, again, it's a psychological, you know, rabbit hole. But is there a moral difference between losing your job to someone that's better, versus something that's better.


Yuval Ackerman

So here's how I'm thinking about this, right? You can have I'm thinking now as a brand owner might be subcontracting to someone else. I would personally like to have someone who I can keep accountable. And know that I'm getting what I'm asking for even after iterations or revisions or whatnot. But I know that I have someone to trust yet again, we're going back to the trust aspect of things. When brands are using AI, or will use AI, they will have something very generic. But they won't have anyone to take responsibility for it. If that makes any sense.


Scott Sereboff

It makes perfect sense. What you're saying is, effectively, I want to be able to walk down to you vols office or call her or whatever and say I need I need this done again, I need difference. I want you to buy in as an as a person with a personality and a belief system. If let's say you were asked to write for some a cause that you truly believed in, it will be reflected in the art you create. Because of your buy in way more than someone who's just the the English phrase, you know, gun for hire, I don't really care about this, I'm just gonna write something and it'll be good, but I'm not invested. Right? And the people that hire you want to be able to come to you and hold you accountable. Like you said, AI is accountability is repetition. Well, that's suck, do it again. Well, that's bad, do it again. And no AI of today can buy into anything because it it's not even fair to say it could care less because it never cares in the first place. So I agree with you on those points. But there will still be a segment of the business world that doesn't care that I just want content. I want to fill pages. And you know, so maybe I'll get rid of 75% of the people that write but I'm keeping the very top level ones because they give me the buy in and the accountability that I lack from the software program that I bought. But I still worry about those, those people right I know there's a solution because we have not been an industrial society since day one. We've had moves from agrarian to industrial, from industrial to technical, from countryside to city, blah, blah, blah. So there are solutions there. Right. But I don't know what they are just yet. And this is going so much faster. When was when did we first all hear about chat? GPT? It seems like it was five minutes ago.


Yuval Ackerman

Yeah, like maybe two, three months ago.


Scott Sereboff

Okay. And in that two, three months, it seems to have become this, this incredibly advanced monster of a system. Now, it may have been that beforehand, but I think our shared point here is, this speed of this stuff is overwhelming. You can't even blink. And you have chat GPT. Four. By the time I learned how to use it, it'll be chat GPT, eight. Right? How do we cope with the speed of this? And that's, again, philosophical, but it's not the same thing as moving over generations from from, you know, raising corn to processing corn and a plant. This is, this is right now.

Yuval Ackerman

Yeah, and there is something scary about this. But just to close my argument from before, please, people like making business with other people. And so when you know that there's AI involved, there's only so much and I need to kind of put an asterix here. There are a lot of really good brands using AI today, for doing great things I've worked with several of them, I truly believe in what they're doing, and the change that they're bringing to the world. But the AI is just the back end of things, not the front end of things. And at the end of the day, people still like making business or doing business with other people. And that I don't think will ever change. And so you can have all the long form blog posts being posted at I don't know the speed of lights, every single second are other seconds on your blog. But then again, what is it creating? You know, and even though it's extremely fast, and it's getting better all the time? What are we as brands put in the back end? And what do we put at the front end? And also what kind of liability and level of responsibility do we take as brands who are publishing that? Regardless of which version of AI created? Whichever piece of content that our subscribers or readers read? Do we stand behind those things? And how much?


Scott Sereboff

You again, make a fantastic point, there was a there was a time when customer service in the United States became heavily outsourced. And I'm not saying it isn't today, but people started to advertise on our support is based in the US. I can see that in what you just said. There'll be a point where people can say, our staff is human, oh, writers are real life people, right? Because there will be a plethora of businesses, etc, that use AI and they can get stuff out fast and it's efficient and all the rest of it. And the use of the human touch will have a bit of a boutique atmosphere where your companies will say, I've got Yuval, not the AI version of you all, but actually her doing the writing.


Yuval Ackerman

Exactly. But if we're going back to the whole chat bots, or customer support, you just reminded me that I wanted to say something about the chat bots versus a real person. Because chatbots can be really, really great at what they're doing. They can be also increasingly and like, annoying as you know, the most annoying things in the world. But I gotta tell you, you know, you were saying those things about chatbots and customer service. And I could not help but think of my my own email service provider which will remain nameless because I have nothing other than bad things to say about them. So they their human customer support is horrendous. I cannot tell you how many hours I've spent with humans online, on chats, emails, not on the phone because that's not a part of their deal. But even when I He did get to someone on the phone, they were useless to me, because they didn't know how to solve my problem, multiple problems. And so if we're talking about AI versus humans in customer support, which is a huge part of what I'm doing through emails, by the way, for the companies that I'm working with, we have to take into consideration that it's not who, again, it's not who is giving you the support, but how do they provide you the help that you need, because it doesn't matter if it's a human, or if it's a bot, if this person is located, or based in the United States, or somewhere in Asia, if the level of support you're getting is subpar, it doesn't matter who gives that support to you, because you will remain frustrated as I am right now.

Scott Sereboff

And I would ask you to consider who tells those chatbots? What to do and who tells those humans what to do. And if you have bad service from a person which exists everywhere in the world, right? I've had phenomenal support from from people in Bangalore, and I've had awful support from people from Bangor, Maine, right? It doesn't matter. But if I have bad support with a human and the same people put the bot together, I'm gonna get bad support from the bot. If I have good support, I'll get good support that goes right back to what's behind the curtain and the man behind the curtain? Who told the Chatbot what to do will be the determining factor on whether or not it's good or not. Because as the humans go, so goes, the AI, it's one of the things that I find very interesting about generative AI, when you turn it loose and tell it okay, you're you fly, be free, you can jump out of the nest. Where does it get the starting point of that generative intelligence? And that's it, it will be interesting to see how if it if these systems are going out and crawling through the internet, to gain intelligence, from where they started? What's the intelligence they're gaining? And how will it affect the moving forward part, to your point about service companies that have really bad service and lots of revenue have obviously long since. But you know, we have our customers don't care, we can treat them like dogs, and they'll still buy. And companies that are trying to grow, will have good service because they know what attracts people, generally speaking, right? So there's a lot of question there you made you make a very, very good point it, which I'll summarize by saying, it doesn't matter where the bad support came from. What's important is where did they learn it? Where they learn that bad support? Because if the person did it, the AI will did it? Is that you know, given the opportunity to have a platform? What would you tell? Let's make it very relevant to you? What would you tell someone that's looking to build a career, in writing or in your areas of expertise? Today, we're just at the very beginning of the effects of AI, what would you tell them? How would you tell them to plan?


Yuval Ackerman

I think there's no replacement for understanding human psychology. And so regardless of how successful AI may or may not be in the future, or even tomorrow, which is still the future, you cannot replace how humans interact with one another. And with what they see out there. Because we are bombarded by 1000s of marketing messages every single day that was proven by multiple studies. But we are in an era where we need to stand up and differentiate ourselves as I'm talking about brands for a second. If we want to stand out in someone's inbox, we have to first and foremost nurture a human connection with our subscribers right from the get go. And that's not something that AI I think still manages to do. And so AI is I would be a bit radical and I would say AI is an excuse for you as a writer not to get better at your craft. And so, understand human psychology and be open to testing because marketing specifically email marketing ng is all about testing, there is no one size fits all, no cookie cutters, best practices are mostly not the best practices by now. And so if you really want to get your career going, start with understanding the humans that you're trying to communicate with and to.


Scott Sereboff

So you would tell the incoming writer, probably what they've been told forever focus on your craft. Be the best, you know, be the best writer, you can whatever, but create that and work within that human to human connection. Absolutely, would you suggest to this, this nascent writer who comes to you for career advice that they should get themselves an understanding of AI as relates to writing?

Yuval Ackerman

It depends what you want to do with it. Just kind of what kind of writer you are


Scott Sereboff

just in general, because you can argue that AI is the competition. I would want to understand how it works.


Yuval Ackerman

For some, it would be a good idea. And that's for everyone. Fair.

Scott Sereboff

Where does the where does the general public go to find out more about you and the kind of services you provide?

Yuval Ackerman

Sure, either through my website ackermancopywriting.com. You can join my email list, which you will find in a not annoying pop up!


Scott Sereboff

very ethical emails, by the way, very ethical email


Yuval Ackerman

list, a very ethical email list, ethical Friday newsletter every Friday. And on LinkedIn, with my full name, Yuval Ackerman, send me a message, join my email list if you want to learn more about how to do emails the right way. Yeah, and that's basically it.

Scott Sereboff

So we'll have on the on the bottom of the screen here, scrolling across, you can see where it says www.ackermancopywriting.com, and you can find her on LinkedIn, and Yuval, thank you very much for an entertaining and informative hour of discussion about AI and the written word.


Yuval Ackerman

Thank you, Scott. It was a blast.


Scott Sereboff

Absolutely everyone. Be sure to join us again next week. We'll have another great conversation with another great professional in the industry. So until then, just everyone stay safe and this has been the privacy of me

35 views0 comments

Comments


bottom of page