ABOUT THE SPEAKER
Damon Horowitz - Philosopher, entrepreneur
Damon Horowitz explores what is possible at the boundaries of technology and the humanities.

Why you should listen

Damon Horowitz is a philosophy professor and serial entrepreneur. He recently joined Google as In-House Philosopher / Director of Engineering, heading development of several initiatives involving social and search. He came to Google from Aardvark, the social search engine, where he was co-founder and CTO, overseeing product development and research strategy. Prior to Aardvark, Horowitz built several companies around applications of intelligent language processing. He co-founded Perspecta (acquired by Excite), was lead architect for Novation Biosciences (acquired by Agilent), and co-founded NewsDB (now Daylife).

Horowitz teaches courses in philosophy, cognitive science, and computer science at several institutions, including Stanford, NYU, University of Pennsylvania and San Quentin State Prison.

TEDxSiliconValley

Damon Horowitz: We need a "moral operating system"

795,617 views

Damon Horowitz reviews the enormous new powers that technology gives us: to know more -- and more about each other -- than ever before. Drawing the audience into a philosophical discussion, Horowitz invites us to pay new attention to the basic philosophy -- the ethical principles -- behind the burst of invention remaking our world. Where's the moral operating system that allows us to make sense of it?


00:15
Power. That is the word that comes to mind. We're the new technologists. We have a lot of data, so we have a lot of power. How much power do we have?

00:26
Scene from a movie: "Apocalypse Now" -- great movie. We've got to get our hero, Captain Willard, to the mouth of the Nung River so he can go pursue Colonel Kurtz. The way we're going to do this is fly him in and drop him off. So the scene: the sky is filled with this fleet of helicopters carrying him in. And there's this loud, thrilling music in the background, this wild music. ♫ Dum da ta da dum ♫ ♫ Dum da ta da dum ♫ ♫ Da ta da da ♫

00:52
That's a lot of power. That's the kind of power I feel in this room. That's the kind of power we have because of all of the data that we have.

01:00
Let's take an example. What can we do with just one person's data? What can we do with that guy's data? I can look at your financial records. I can tell if you pay your bills on time. I know if you're good to give a loan to. I can look at your medical records; I can see if your pump is still pumping -- see if you're good to offer insurance to. I can look at your clicking patterns. When you come to my website, I actually know what you're going to do already because I've seen you visit millions of websites before. And I'm sorry to tell you, you're like a poker player, you have a tell. I can tell with data analysis what you're going to do before you even do it. I know what you like. I know who you are, and that's even before I look at your mail or your phone. Those are the kinds of things we can do with the data that we have.

01:50
But I'm not actually here to talk about what we can do. I'm here to talk about what we should do. What's the right thing to do?

02:04
Now I see some puzzled looks like, "Why are you asking us what's the right thing to do? We're just building this stuff. Somebody else is using it." Fair enough.

02:15
But it brings me back. I think about World War II -- some of our great technologists then, some of our great physicists, studying nuclear fission and fusion -- just nuclear stuff. We gather together these physicists in Los Alamos to see what they'll build. We want the people building the technology thinking about what we should be doing with the technology.

02:41
So what should we be doing with that guy's data? Should we be collecting it, gathering it, so we can make his online experience better? So we can make money? So we can protect ourselves if he was up to no good? Or should we respect his privacy, protect his dignity and leave him alone? Which one is it? How should we figure it out?

03:07
I know: crowdsource. Let's crowdsource this. So to get people warmed up, let's start with an easy question -- something I'm sure everybody here has an opinion about: iPhone versus Android. Let's do a show of hands -- iPhone. Uh huh. Android. You'd think with a bunch of smart people we wouldn't be such suckers just for the pretty phones. (Laughter)

03:35
Next question, a little bit harder. Should we be collecting all of that guy's data to make his experiences better and to protect ourselves in case he's up to no good? Or should we leave him alone? Collect his data. Leave him alone. You're safe. It's fine. (Laughter)

04:00
Okay, last question -- harder question -- when trying to evaluate what we should do in this case, should we use a Kantian deontological moral framework, or should we use a Millian consequentialist one? Kant. Mill. Not as many votes. (Laughter)

04:30
Yeah, that's a terrifying result. Terrifying, because we have stronger opinions about our hand-held devices than about the moral framework we should use to guide our decisions. How do we know what to do with all the power we have if we don't have a moral framework? We know more about mobile operating systems, but what we really need is a moral operating system.

04:58
What's a moral operating system? We all know right and wrong, right? You feel good when you do something right, you feel bad when you do something wrong. Our parents teach us that: praise with the good, scold with the bad. But how do we figure out what's right and wrong? And from day to day, we have the techniques that we use. Maybe we just follow our gut. Maybe we take a vote -- we crowdsource. Or maybe we punt -- ask the legal department, see what they say. In other words, it's kind of random, kind of ad hoc, how we figure out what we should do. And maybe, if we want to be on surer footing, what we really want is a moral framework that will help guide us there, that will tell us what kinds of things are right and wrong in the first place, and how we would know in a given situation what to do.

05:46
So let's get a moral framework. We're numbers people, living by numbers. How can we use numbers as the basis for a moral framework? I know a guy who did exactly that. A brilliant guy -- he's been dead 2,500 years. Plato, that's right. Remember him -- old philosopher? You were sleeping during that class.

06:12
And Plato, he had a lot of the same concerns that we did. He was worried about right and wrong. He wanted to know what is just. But he was worried that all we seem to be doing is trading opinions about this. He says something's just. She says something else is just. It's kind of convincing when he talks and when she talks too. I'm just going back and forth; I'm not getting anywhere. I don't want opinions; I want knowledge. I want to know the truth about justice -- like we have truths in math. In math, we know the objective facts.

06:41
Take a number, any number -- two. Favorite number. I love that number. There are truths about two. If you've got two of something, you add two more, you get four. That's true no matter what thing you're talking about. It's an objective truth about the form of two, the abstract form. When you have two of anything -- two eyes, two ears, two noses, just two protrusions -- those all partake of the form of two. They all participate in the truths that two has. They all have two-ness in them. And therefore, it's not a matter of opinion.

07:13
What if, Plato thought, ethics was like math? What if there were a pure form of justice? What if there are truths about justice, and you could just look around in this world and see which things participated, partook of that form of justice? Then you would know what was really just and what wasn't. It wouldn't be a matter of just opinion or just appearances.

07:37
That's a stunning vision. I mean, think about that. How grand. How ambitious. That's as ambitious as we are. He wants to solve ethics. He wants objective truths. If you think that way, you have a Platonist moral framework.

07:54
If you don't think that way, well, you have a lot of company in the history of Western philosophy, because the tidy idea, you know, people criticized it. Aristotle, in particular, he was not amused. He thought it was impractical. Aristotle said, "We should seek only so much precision in each subject as that subject allows." Aristotle thought ethics wasn't a lot like math. He thought ethics was a matter of making decisions in the here-and-now using our best judgment to find the right path. If you think that, Plato's not your guy. But don't give up. Maybe there's another way that we can use numbers as the basis of our moral framework.

08:29
that we can use numbers as the basis of our moral framework.
194
494000
3000
08:33
How about this:
195
498000
2000
08:35
What if in any situation you could just calculate,
196
500000
3000
08:38
look at the choices,
197
503000
2000
08:40
measure out which one's better and know what to do?
198
505000
3000
08:43
That sound familiar?
199
508000
2000
08:45
That's a utilitarian moral framework.
200
510000
3000
08:48
John Stuart Mill was a great advocate of this --
201
513000
2000
08:50
nice guy besides --
202
515000
2000
08:52
and only been dead 200 years.
203
517000
2000
08:54
So basis of utilitarianism --
204
519000
2000
08:56
I'm sure you're familiar at least.
205
521000
2000
08:58
The three people who voted for Mill before are familiar with this.
206
523000
2000
09:00
But here's the way it works.
207
525000
2000
09:02
What if morals, what if what makes something moral
208
527000
3000
09:05
is just a matter of if it maximizes pleasure
209
530000
2000
09:07
and minimizes pain?
210
532000
2000
09:09
It does something intrinsic to the act.
211
534000
3000
09:12
It's not like its relation to some abstract form.
212
537000
2000
09:14
It's just a matter of the consequences.
213
539000
2000
09:16
You just look at the consequences
214
541000
2000
09:18
and see if, overall, it's for the good or for the worse.
215
543000
2000
09:20
That would be simple. Then we know what to do.
216
545000
2000
09:22
Let's take an example. Suppose I go up and I say, "I'm going to take your phone." Not just because it rang earlier, but I'm going to take it because I made a little calculation. I thought, that guy looks suspicious. And what if he's been sending little messages to Bin Laden's hideout -- or whoever took over after Bin Laden -- and he's actually like a terrorist, a sleeper cell. I'm going to find that out, and when I find that out, I'm going to prevent a huge amount of damage that he could cause. That has a very high utility to prevent that damage. And compared to the little pain that it's going to cause -- because it's going to be embarrassing when I'm looking on his phone and seeing that he has a Farmville problem and that whole bit -- that's overwhelmed by the value of looking at the phone. If you feel that way, that's a utilitarian choice.

10:10
But maybe you don't feel that way either. Maybe you think, it's his phone. It's wrong to take his phone because he's a person and he has rights and he has dignity, and we can't just interfere with that. He has autonomy. It doesn't matter what the calculations are. There are things that are intrinsically wrong -- like lying is wrong, like torturing innocent children is wrong. Kant was very good on this point, and he said it a little better than I'll say it. He said we should use our reason to figure out the rules by which we should guide our conduct, and then it is our duty to follow those rules. It's not a matter of calculation.

10:51
So let's stop. We're right in the thick of it, this philosophical thicket. And this goes on for thousands of years, because these are hard questions, and I've only got 15 minutes. So let's cut to the chase. How should we be making our decisions? Is it Plato, is it Aristotle, is it Kant, is it Mill? What should we be doing? What's the answer? What's the formula that we can use in any situation to determine what we should do, whether we should use that guy's data or not? What's the formula?

11:25
There's not a formula. There's not a simple answer. Ethics is hard. Ethics requires thinking. And that's uncomfortable.

11:40
I know; I spent a lot of my career in artificial intelligence, trying to build machines that could do some of this thinking for us, that could give us answers. But they can't. You can't just take human thinking and put it into a machine. We're the ones who have to do it. Happily, we're not machines, and we can do it. Not only can we think, we must.

12:05
Hannah Arendt said, "The sad truth is that most evil done in this world is not done by people who choose to be evil. It arises from not thinking." That's what she called the "banality of evil." And the response to that is that we demand the exercise of thinking from every sane person.

12:29
So let's do that. Let's think. In fact, let's start right now. Every person in this room do this: think of the last time you had a decision to make where you were worried to do the right thing, where you wondered, "What should I be doing?" Bring that to mind, and now reflect on that and say, "How did I come up with that decision? What did I do? Did I follow my gut? Did I have somebody vote on it? Or did I punt to legal?" Or now we have a few more choices. "Did I evaluate what would be the highest pleasure like Mill would? Or like Kant, did I use reason to figure out what was intrinsically right?" Think about it. Really bring it to mind. This is important. It is so important we are going to spend 30 seconds of valuable TEDTalk time doing nothing but thinking about this. Are you ready? Go.

13:33
Stop. Good work. What you just did, that's the first step towards taking responsibility for what we should do with all of our power.

13:45
Now the next step -- try this. Go find a friend and explain to them how you made that decision. Not right now. Wait till I finish talking. Do it over lunch. And don't just find another technologist friend; find somebody different than you. Find an artist or a writer -- or, heaven forbid, find a philosopher and talk to them. In fact, find somebody from the humanities. Why? Because they think about problems differently than we do as technologists.

14:13
Just a few days ago, right across the street from here, there were hundreds of people gathered together. It was technologists and humanists at that big BiblioTech Conference. And they gathered together because the technologists wanted to learn what it would be like to think from a humanities perspective. You have someone from Google talking to someone who does comparative literature. You're thinking about the relevance of 17th century French theater -- how does that bear upon venture capital? Well that's interesting. That's a different way of thinking. And when you think in that way, you become more sensitive to the human considerations, which are crucial to making ethical decisions.

14:49
So imagine that right now you went and you found your musician friend. And you're telling him what we're talking about, about our whole data revolution and all this -- maybe even hum a few bars of our theme music. ♫ Dum ta da da dum dum ta da da dum ♫ Well, your musician friend will stop you and say, "You know, the theme music for your data revolution, that's an opera, that's Wagner. It's based on Norse legend. It's gods and mythical creatures fighting over magical jewelry."

15:19
That's interesting. Now it's also a beautiful opera, and we're moved by that opera. We're moved because it's about the battle between good and evil, about right and wrong. And we care about right and wrong. We care what happens in that opera. We care what happens in "Apocalypse Now." And we certainly care what happens with our technologies. We have so much power today, it is up to us to figure out what to do, and that's the good news. We're the ones writing this opera. This is our movie. We figure out what will happen with this technology. We determine how this will all end.

16:04
Thank you.

16:06
(Applause)



Data provided by TED.
