  #31
Old March 28th 07, 06:45 AM posted to rec.radio.shortwave
external usenet poster
 
First recorded activity by RadioBanter: Jul 2006
Posts: 726
Default Eduardo - don't let the door hit you in the ass on the way out !


"D Peter Maus" wrote in message
...
David Eduardo wrote:
"D Peter Maus" wrote in message
...
David Eduardo wrote:
"D Peter Maus" wrote in message
...
From a sample of the listeners. The rest of the process is
evaluation and interpretation by Consultants and PD's.

While some perceptual research requires interpretation, a music test
does not. It is simply play the top songs more than the next group, and
don't play the bad ones.

You've just made my point for me.


But there is no interpretation there. It is just playing the number of good
songs needed, in ranking order, to create the station library. Unless a
station plays a lesser-scoring song more than a top scorer, there has been no
interpretation. If a ball game ends with a score of 5 to 7, the team with 7
wins... similarly, no interpretation.





Ok. Let's look at that.

How deep do you go on the list? You get one hundred songs rated, how
deep do you go? How do you tier them?


We already know that small variations are insignificant... just as we know
the Census was off by a range of plus or minus 4 points.

So we can really look at the songs based on belonging to sets. All those in
a particular set have, statistically, the same score within the margin of
error. Each set is a tier. Each rotates in proportion to popularity.

Every format has a close range, in any market, of songs that are more
positive than negative. So your basis is songs that are positive without
significant negatives. You get your playlist without looking at artist and
title.
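That set-based tiering can be sketched in a few lines; a hypothetical illustration, assuming a 4-point margin of error and invented song names and scores:

```python
# Hypothetical sketch: group tested songs into tiers ("sets") whose scores
# are statistically indistinguishable within the test's margin of error.
# Song names, scores, and the 4-point margin are all invented.

def tier_songs(scores, margin=4.0):
    """Rank songs by score, then start a new tier whenever a song falls
    more than `margin` points below the top score of the current tier."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    tiers, current, top = [], [], None
    for song, score in ranked:
        if top is None or top - score <= margin:
            current.append(song)
            top = score if top is None else top
        else:
            tiers.append(current)
            current, top = [song], score
    if current:
        tiers.append(current)
    return tiers

scores = {"A": 92, "B": 90, "C": 89, "D": 81, "E": 80, "F": 68}
print(tier_songs(scores))  # [['A', 'B', 'C'], ['D', 'E'], ['F']]
```

Each tier then rotates as a unit, in proportion to its popularity.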

How do you rotate them.


In proportion to popularity.

With what do you mix them?


Other songs of different levels of hittiness.

How old do you go?


As far as the listeners tell you.

What's your current/gold ratio?


In proportion to the currents vs. golds that test.

How many categories of Recurrent do you have?


Depends how many recurrents test. Many stations change categories and
rotations based on what tests... if there are fewer recurrents, then the
category(ies) are smaller.

Do you play the top 10 straight up, or do you break it up into top 10 and
top 5 like WLS did in the 70's?


Based, here, on weekly call out. There are seldom more than 3 very hot
songs, and they rotate faster if they exist. If there are no standouts, then
they rotate alike.

With different rotational periods for every category.


Based on scores and the math of rotation. Like a big Swiss watch full of
gears.
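The gear analogy can be made concrete with a little arithmetic; a sketch under invented numbers (the category names, scores, library sizes, and weekly play budget are all hypothetical):

```python
# Hypothetical sketch of "rotation in proportion to popularity": allocate a
# station's weekly play budget across categories by score share, then derive
# how often each song in a category comes around (its "gear" period).

def rotation_plan(categories, plays_per_week):
    """categories maps name -> (average test score, number of songs)."""
    total_score = sum(score for score, _ in categories.values())
    plan = {}
    for name, (score, size) in categories.items():
        plays = round(plays_per_week * score / total_score)
        plan[name] = {"weekly_plays": plays,
                      "spins_per_song": round(plays / size, 1)}
    return plan

cats = {"Power": (90, 10), "Secondary": (70, 20), "Gold": (40, 100)}
print(rotation_plan(cats, 2000))
```

Higher-testing categories spin each song more often; a bigger library inside a category slows each individual song down, exactly like a larger gear.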

There you have four decisions, interpretations, to be made by
Manglement that separate a list of songs from Music Programming.


And all four are really determined by the characteristics of the songs that
tested.

Your sample picks songs.

Your PD (Local or Regional) and his/her Consultant programs the music.


There are very few consulted stations these days, and the ones that do exist
do not program the music... they help the PD develop and implement
programming strategy... and maybe to implement the test. The PDs, except for
the untestable new songs, only play what the listeners picked in proportion
to scores.


  #32
Old March 28th 07, 06:46 AM posted to rec.radio.shortwave


"Brenda Ann" wrote in message
.. .

" David Eduardo wrote:
In fact, the Census Bureau fairly conclusively showed that the Census
could be done more accurately by a sample than a census... the problem
is the constitution requires, specifically, a census.



Say WHAT?????

I don't care how good your stats happen to be, there is no way in hell
that a sample of less than 100% can be as accurate, let alone MORE
accurate, than a sample of 100%.


That is because in today's America, doing a true Census where everyone is
counted, and only counted once, and counted in the proper place, is
impossible. That is why the Census has a significant margin of error.

That's blowing smoke into anal crevices. Period. That's the same warped
logic that tries to convince people that a 16Kb digital stream sounds as
good as a 15KHz analog signal.


You might read Bob Orban's statements on psychoacoustics. You are comparing
apples and oranges.

There's just not enough samples there to get an accurate representation of
the original analog. And I don't care what anyone says, there's no way that
digital will ever be 'as good as analog', let alone better, because to get
a perfect representation of the original analog waveform (especially a
complex waveform) you would have to have an infinite number of samples.


Are you suggesting an analog Census as opposed to a digital one? We were
discussing the Census, and you are backtracking.


  #33
Old March 28th 07, 06:47 AM posted to rec.radio.shortwave


"Brenda Ann" wrote in message
.. .

"Brenda Ann" wrote in message
.. .

" David Eduardo wrote:
In fact, the Census Bureau fairly conclusively showed that the Census
could be done more accurately by a sample than a census... the problem
is the constitution requires, specifically, a census.


Say WHAT?????

I don't care how good your stats happen to be, there is no way in hell
that a sample of less than 100% can be as accurate, let alone MORE
accurate, than a sample of 100%. That's blowing smoke into anal crevices.
Period. That's the same warped logic that tries to convince people that
a 16Kb digital stream sounds as good as a 15KHz analog signal. There's
just not enough samples there to get an accurate representation of the
original analog. And I don't care what anyone says, there's no way that
digital will ever be 'as good as analog', let alone better, because to
get a perfect representation of the original analog waveform (especially
a complex waveform) you would have to have an infinite number of samples.



BTW, it doesn't matter that the human ear can make up for much of the loss
of proper waveform from digital. That's just working to the least common
denominator, and eventually you end up losing more information than the
human ear can make up for, since the lowest common denominator tends to
keep getting lower with each generation of a technology.


Per Mr. Orban... even including a link...

"Don't rely on the measurements you may be familiar with to evaluate ANY bit
reduction system incorporating a psychoacoustic model. The whole point of
using the model is to throw away stuff that people can't hear anyway. Such
stuff is likely to appear on spectrum analyzers and the like, but
interpreting what your measurements mean (if anything) is non-trivial, to
say the least.

There is an accepted objective method of measuring the performance of such
codecs (an ITU standard called PEAQ) but it isn't perfect, and, of course,
it too must incorporate a psychoacoustic model.

Here's a bibliography on objective measurement of bit reduction systems
incorporating psychoacoustic models. Be warned -- this is heavy reading."

http://www.opticom.de/technology/literature.html


  #34
Old March 28th 07, 06:58 AM posted to rec.radio.shortwave
RHF

On Mar 27, 4:33 pm, "David Eduardo" wrote:
"dxAce" wrote in message

...







David Eduardo wrote:


wrote in message
roups.com...
On Mar 27, 10:01 pm, "David Eduardo" wrote:


Yea Eduardo, you are one of those 55+ "fringe listeners" that don't
count - new ownership always "cleans house".


Our outgoing CEO is in his 70's... didn't he count?


Not according to you. As far as your model goes it should be 55 and out to
the
pasture (the funny farm in your case).


Totally different thing. One has to do with ability to run a business, and
the other with what ages advertisers wish to reach with their ad messages.





So DE you are the proverbial Rock-Off-the-Old-Block
since it has been many Years since you were a "Chip". )

DE - May You Out-Live IBOC "HD" Radio ~ RHF
  #35
Old March 28th 07, 08:44 AM posted to rec.radio.shortwave

David Eduardo wrote:
"D Peter Maus" wrote in message
...
David Eduardo wrote:
"D Peter Maus" wrote in message
...
Any effort I have seen (some done on purpose) to disprove music test
results, when the test itself follows standard techniques, has failed.
That's because the axiomatic assumptions are the same in each case.
The statistical science is the same. Of course the results are going to
be the same
Did you know most of the Census data was produced by the Census long
form, given to only about 12% of all households and/or persons?

The data is considered reliable enough to use for a huge variety of
government programs.


Yeah, I've read that.

And most of the debate on either side.


Most of the debate has to do with politics and the status quo. Politicians,
who would have to initiate a change, do not want one, as they are going to be
concerned about redistricting and changes in Federal funds. The debate has
very little to do with accuracy and a lot to do with ensuring reelection.




On BOTH sides.


In fact, the Census Bureau fairly conclusively showed that the Census
could be done more accurately by a sample than a census... the problem is
the constitution requires, specifically, a census.


And there is a reason for that.


Yeah, the ability to poll did not really exist when that part of the
constitution was written... and a census was simpler with a population that
had limited mobility and lower population densities.



Not exactly the point I was trying to make, no.

You can't do a head count more accurately by statistical sampling
than you can by counting heads. One has a margin of error, one does not.
And that's the point. Whether or not the ability to manipulate
numbers was advanced enough at the time of the Constitution is not the
point. The point is, you can't get more accurate than a direct count.

Now, whether or not the count is actually taking place...that would
be a good discussion left for a time when the beer flows freely and
neither of us is sober enough to do any damage.



In the sense that the test is implemented by the station program
management, you can say that your point is accurate. But as to the
selection of what songs to play, that is entirely done by the listeners.

By a sample of the listeners, measured against the executively defined
'tiers'. The difference between a list of music titles and music
programming.


The programming is the mixing of the songs. The frequency of play is in
proportion to popularity. There really is no other way. The music itself is
picked by the listeners. The way it is blended together is the programming
function.



Yes, I believe I just said that. Or am I in a different room.


In the sense that listeners are involved, yes, your point is valid.
But the statement is incomplete.


I don't think so. As long as play is in proportion to popularity (which is
the entire purpose of a test... to tell how much each song is wanted), it is
totally responsive to the listeners' picks. The programmer decides how the
songs should flow together...



Exactly my point.



In some cases, the adherence to the test is so total that a minimum average
score is put on each hour that matches the average of the testing songs.
If you pick the sample to faithfully represent the group under study and
every subset of interest, you don't need 100%. When you can repeat the
test, with the same sample specifications, again (and again and again) and
get the same results, you know the sample does faithfully represent the
universe under study.


Yes, David. We agree on the science, and how it's done. My point is
directed at the statement. And that it is incomplete. Regardless of how
you scientifically measure, gather, and interpret the data, it's still a
matter of data implemented based on decisions of PD's and Consultants,
whether at the local level, or not.


I still do not follow this. If songs are played in total proportionality to
scores (in reality, it might be by quintiles or something similar), then the
test is not being interpreted. It is just a ranker of song popularity, with
the best being programmed the most.

That's not the point. There is no use in doing a census when a sample
gives the same results, reliably, over and over. The other factor, of
course, is that a census of all the listeners of a major LA station might
cost over $100 million, while the highest billing station in the
market grosses $60 million, making a census totally impossible.


Regardless of the reasons, a sample does not produce the same result
as a census. Similar, yes.


No two Censuses (Censi?) give equal results... the last one was off by as
much as +/- 4%. One reason is that it takes so long to do that many
characteristics of the universe have changed, due to migration, moving,
births, deaths, etc. I would say that the difference between two censuses is
about the same as or more than the difference between a poll and a census.

And if the assumptions are correct and the sample is not contaminated by
individuals that fall outside of the norm, results can be quite close to a
census. But the two are not the same. Regardless of replicability. Close,
no matter how, is not identity.


In any poll, some persons are rejected as not falling within the recruit
specifications, either at the start or based on differences between
collected data and the recruit specifications. For any imaginable
application in broadcasting, the cost of increasing the sample up to and
including a census is not, then, justifiable.
And it's the subtlest of differences at the input, that can make the
most dramatic differences at the output.


That is where recruit verifications are important. If properly conducted,
multiple tiers of recruit verification are done. This includes verification
of a percentage of recruits by a different person, and reverification at the
time of data collection and further verification via "trick questions" in
the collection process and data cleansing after the process is done to
remove people who lied or were improperly recruited.
In this case, I have to say that if a music test with 100 people (the
average size) did not work, ratings-wise, nobody would do them. Marketing
implies hype and puffery in your statement. The reality is that the
results are the same for 200, 400, 1000 listeners. So there is no hype or
distortion and there is very accurate data that reflects the total
listener base of a station (and, sometimes, its direct competitors).

Not hype and puffery, so much as misdirection, and illusion. The
'show' in show business.


The show is in how a station is put together... the imaging, etc. The
underpinnings of rotations, songs and such are pure math.
The results are the same because the axiomatic assumptions are the
same. Statistics is a kind of an elegantly deceptive science. Because the
assumptions are accepted as axiomatic, the results are believed beyond
reproach. Neither is the case.


Generally, in a music test, there are no assumptions. You sample a portion
of your audience (generally the users who give most of the listening time,
P1's or, in English, the ones who listen to you more than any other station)
based on the simple fact that they listen. Then you get the right balance
within the sample for age and sex, and you have no assumptions... just a
reflection of the real audience you serve.
I'm not saying the model doesn't work, because the business and many
of its successes are based on it. But the process is in fact a shorthand
for the cut-and-try of carefully crafted creative programming. Which we
both know to be far more unreliable and expensive than corporate bean
counters and lawyers are willing to tolerate.


My first experience with music testing was in a format I had programmed by
ear and gut for two decades... always resulting in #1 stations. I guessed
the scores (tiers, really) of the first 100 songs. I was off by over 20% on
more than 75% of the songs. After implementing, on a station that was
already successful, the ratings increased dramatically. I knew how to
program the format, but did not know enough about how listeners felt about
the music. The combination of both was magic.

Similarly, in a case much more recently, a 20-share researched station in a
really large market got a competitor programmed by a guy who was the
recognized expert in the music we played. I mean, he knew every nuance of
every song and group... and he got a 1.8 to our ongoing 20. No research,
lots of gut feel for the music. It was amusing, because the artists figured
this out, and we were the one with daily live unplugged sets in the studio,
the station with the unreleased long versions of songs, and all kinds of
twists on the music... but we only played hits, no matter what version.
Still there are wild successes that defy the research. iPod was one
of them. Zero interest in portable hard drive based digital music players.
Until someone introduced one. How the research of one company can be so
wrong, while the research of another can be so right has to do with the
methodology.


I can't engage in a discussion of a market I do not know. But it is awfully
rare to find any format today that does not use some form of listener
consultation or feedback to establish parameters.
Much of what passes for research, today, is corporate investigation
into how to reproduce a previous success. Quite literally, asking the
questions which will produce the desired answers.


I have never seen that, but I don't work outside one company in the US. And
since I supervise our research, and am paid for results, doing anything but
what the listener wants would not be in my benefit or that of the company.
Much research works. But it's flawed. Because it dismisses both
undesired and unsought results.


Much of the research I do is based on finding problems before they become
disasters. Like a health check-up, we are looking for the bugs under the
rocks, and we spend a lot on the shovels we use to turn over the rocks.
Jim Collins in "Good to Great" said that most research is a waste of
time, because it's bad research. It proves nothing, offers nothing but
what is expected. That the hallmark of good research is that it produces a
result you didn't expect. And that the hallmark of great research is that
it produces a result you don't like.


The purpose of a music test is to find out the bad news about bad songs...
and get rid of them... as well as the good news on what to play a lot. In
perceptual research, the idea is to identify competitive challenges,
weaknesses, etc. and fix them, plus reinforcing the good things. Nobody
wants more to find out bad news than we do, if there is any.
Very little research does either. Instead, it seeks to prescreen and
select a very carefully crafted sample and ask highly directed questions, to
produce results that don't really break any ground. Because the
methodology is circular. And radio stations are the masters at this. Select
a group of P1 listeners, and refine them to a focus group. Take the
results of the focus group and shape the survey questionnaire. Apply the
survey questionnaire to more listeners.


I have never done a focus group. They don't work. And focus groups are not
used for music tests, either. Perceptuals are best done with face to face
personal interviews by a very skilled interviewer.

The reason we test heavy listeners is that the 30% of total listeners who
are heavy or P1 give over 70% of the listening time. You will always get P2
or lighter listeners if you do a very good job on the core. It's the bell
curve, plain and simple.
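The 30/70 concentration claim is easy to sanity-check; a toy sketch, where the distribution of weekly listening hours is entirely invented:

```python
# Invented, skewed listening distribution: 10 heavy (P1), 10 medium, and
# 10 light listeners. What share of all listening does the top third give?
hours = [20] * 10 + [6] * 10 + [2] * 10
hours.sort(reverse=True)
heavy = hours[:len(hours) // 3]   # the heaviest third of the sample
share = sum(heavy) / sum(hours)
print(round(share, 2))  # 0.71
```

Any similarly skewed distribution gives the same shape of result, which is why testing the core P1s covers most of the actual listening.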
Well, ****...what the hell results come from that?

I've been involved in nearly as many focus groups, perceptual
surveys, and music tests as you. And damnation if every one of them didn't
work.


I have done 50 music tests since January. I have done over 400 in the last
36 months. I don't do focus groups. But we do loads of personal
perceptuals, and have a staff of nearly 50 to help do this.
Until they didn't. And we got our asses handed to us by a station that
did everything that the 'listeners' said they didn't want. Everything the
focus groups said was wrong. Everything the survey results said wouldn't
work.


There are bad cars, bad pizzas and bad research companies. And there is bad
implementation. That does not impinge on the reliability of good research,
implemented by good radio people. All it proves, returning to the bell
curve, is that half the population has an IQ under 100.

All of which raises questions about the quality of research, and the
real effectiveness of a sample.

100, 200, 400, or 1000....it doesn't matter. Replicable results are
no surprise if the axiomatic assumptions are the same. And the results may
work. But they're not the same as a 100% sample. If they were, no radio
station, no business, doing research, would ever fail for lack of
audience.


Much more of a station is the execution of the format. I can show you how
the same good research used by a bad PD produced half the ratings of the
same research used by a good PD at the same station in the same market.

I can give a magnificent palette and a wonderful set of brushes to a monkey,
and he will still not be Van Gogh.

Airchecking and training talent, doing compelling promotions, getting
involved with the artists, making every hour flow beautifully, refreshing
promos, making sure the audio is right for the station listener group,
holding a tolerable commercial load, etc., etc. are what makes a good music
list work... it is the whole station, not the research... the research is
just one tool in a kit. Necessary, but so are all the others.
So, getting back to my point... there are still executive decisions
to be made in programming music. Those executive decisions are made by the
station, or its parent. And only a sample of the listeners actually have
input. That sample may pick the songs they pick, but the other part of
the process is the executive decisions about category, rotation, and
execution. The executive decisions are what separates music data from
music programming.


The rotations are a product of scores, nothing more and nothing less.
Categories are simply collections of like-testing songs that move like
gears, every song a tooth. There are really no decisions other than, "here
is the list... play them in order of appeal." In fact, a music test should
instruct the listeners to indicate "how much do you want to hear this song
on the radio today."
So, I stand by my assertion that your statement may be accurate,
but, at best, incomplete.


The only possible area of "incompleteness" would be sample size. But
testing has shown that doubling or tripling has no effect on the results.
Going any further would be beyond the economics of radio, so it is not
really incomplete but, rather, impossible.
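The claim that doubling or tripling the sample changes little falls out of the standard margin-of-error formula for a proportion; a quick sketch at 95% confidence with worst-case p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 200, 400, 1000):
    print(n, f"+/- {100 * margin_of_error(n):.1f} points")
```

Going from 100 to 400 respondents only halves the margin, which is why quadrupling the cost of a music test buys very little extra precision.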



Wow. You're amazing. You've debated every point that wasn't at issue,
here. Are you SURE you're not Michael Bryant?

To review....the point I was trying to make, which apparently got
lost in a lot more things than I had intended to say....


Your original statement was that you don't program the music, the
listeners program the music.

My rebuttal, which need not be repeated here in its detail for the
fifth time, is that your listeners DON'T program the music, but that
rather a sample of your listeners has influence on the songs you play.
But the Programming of the Music is still based on decisions of PD's
and Consultants.


Or for those in Rio Linda....a group of your listeners pick songs,
YOU PROGRAM the music based on them.


Damn, David... I love you like a brother, but ****....sometimes,
you're such a Consultant.










  #36
Old March 28th 07, 09:29 AM posted to rec.radio.shortwave


"SWLforever" wrote in message
news:GgoOh.3767$J21.3336@trndny03...
D Peter Maus wrote:

Ok. Let's look at that.

How deep do you go on the list? You get one hundred songs rated, how
deep do you go? How do you tier them?

How do you rotate them. With what do you mix them? How old do you go?
What's your current/gold ratio? How many categories of Recurrent do you
have? Do you play the top 10 straight up, or do you break it up into top
10 and top 5 like WLS did in the 70's. With different rotational periods
for every category.


Who decides what songs are on the survey list? That decision would
ultimately have more impact on the songs that get played than the
influence of the station PD's.


Which is precisely my point. The 'listeners choose the playlist'.. from a
prepared playlist. It all goes back to the stations telling the listeners
what they will like. That they let them choose a few from that limited
selection only reduces everyone else's choices even further.




  #37
Old March 28th 07, 11:22 AM posted to rec.radio.shortwave



D Peter Maus wrote:

David Eduardo wrote:
"D Peter Maus" wrote in message
...

A sample.

A sample, well designed.

Represents a station's entire listenership.


My statement stands. A sample influences your decisions. How the sample
is designed and how it represents are decisions based on assumptions
accepted by statistical science.


Statistics _is_ a science, and it accepts the fact that there is always a
margin of error in polling. The degree of error that is acceptable depends
on the use that will be made of the information. The data obtained is,
itself, accurate within the margin of error. Radio ad sales is tolerant of
a degree of error, as there are more important variables involved in
advertising than just the margin of error of a survey sample.
Assumptions that cannot be proven, nor demonstrated to be true in any
given instance.


Actually, if you look at margin of error, a properly designed poll...
whether for music to play or audience size, can be pretty much proven by
replication procedures.

The differentiation between statistical analysis and census. One is a
scientific extrapolation. The other is a headcount. One CAN result in the
same outcome as the other, within defined limits of acceptability. But
they are not the same. And they can produce significantly divergent results.


If there is divergence, it is due to not doing the poll correctly. In this
case, it is quality control. It's just like making a car... faults per 1000
vehicles, etc.

So, reiterating, music is not programmed by your listeners, it's
selection is influenced by a sample. But the decisions are made by
consultants and PD's.


Since replication can verify using a sample to determine the acceptability
of songs, then the issue is implementation... a separate matter. Neither PDs
nor consultants change test results. It is almost plug and play once you
have the results.
Selling the process to your listeners: "Music is programmed by the
listeners."


I did a little experiment... in Argentina, we did a 100-person music test.
We also did the test on the air, and ran the test form in a large newspaper
(circulation 1.1 million). We got 40,000 forms back. The test matched the
newspaper results. Then we pulled 100 test forms at random from the 40,000.
The results were also the same.
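The experiment described above is easy to simulate; a toy version with four invented songs and made-up appeal probabilities:

```python
# Simulate the newspaper experiment: 40,000 "ballots" vs. a 100-person test.
# Each respondent likes a song with that song's (invented) true appeal.
import random

random.seed(7)  # fixed seed so the sketch is reproducible
true_appeal = {"Song A": 0.9, "Song B": 0.7, "Song C": 0.5, "Song D": 0.2}

def run_test(n):
    """Return the songs ranked by like count among n simulated respondents."""
    likes = dict.fromkeys(true_appeal, 0)
    for _ in range(n):
        for song, p in true_appeal.items():
            if random.random() < p:
                likes[song] += 1
    return sorted(likes, key=likes.get, reverse=True)

full = run_test(40000)   # the census-scale newspaper test
small = run_test(100)    # the standard-size music test
print(full == small, full)
```

With well-separated appeal scores, the 100-person ranking almost always matches the 40,000-person one, which is the point being made about replication.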
Critical analysis asserts that cannot be the case.


Any effort I have seen (some done on purpose) to disprove music test
results, when the test itself follows standard techniques, has failed.


That's because the axiomatic assumptions are the same in each case.
The statistical science is the same. Of course the results are going to
be the same

David, we're not arguing the test. Nor the science. But the
PD/Marketing claim that the music is programmed by the listeners.

It's not. It's programmed by the PD's and Consultants based on the
results of a sample of listeners.

The two statements are not the same. Your statement implies a sample
of 100%. Which is not the case.

As I said, the results may be, give or take, about the same. But they
are not the same. Any more than a statistical extrapolation is the same
as a census.

One of the reasons you take as much **** here as you do, is because
of statements that sound more like marketing than discussion.


Which is precisely why he's worn out his welcome everywhere he goes.

He knows that, I know that, but his defence mechanism will never, ever, allow him
to admit it.

It's a mental illness.


  #38
Old March 28th 07, 03:12 PM posted to rec.radio.shortwave


"SWLforever" wrote in message
news:GgoOh.3767$J21.3336@trndny03...
D Peter Maus wrote:

Ok. Let's look at that.

How deep do you go on the list? You get one hundred songs rated, how
deep do you go? How do you tier them?

How do you rotate them. With what do you mix them? How old do you go?
What's your current/gold ratio? How many categories of Recurrent do you
have? Do you play the top 10 straight up, or do you break it up into top
10 and top 5 like WLS did in the 70's. With different rotational periods
for every category.


Who decides what songs are on the survey list? That decision would
ultimately have more impact on the songs that get played than the
influence of the station PD's.


This is the area where most mistakes are made.

1. All songs currently being played.
2. All songs that you have played in the past that did not pass but which
were "close enough" to be retested, since resting them (not playing them)
might be enough for them to test.
3. Songs that direct competitors you share audience with play, but that you don't.
4. Songs you don't play that are "in the ballpark" sound-wise that get
exposure by TV (theme song of a show) or movies or ???.
5. Anything else you can find.

It is hard to build a list because it is not easy to find as many songs that
fit 1-5 above as it would seem. Keep in mind that you cannot test new songs
(unheard by your audience) until they have been amply played, as music
testing is a way of finding out how much people like songs they have heard,
not a crystal ball.
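Assembling the list from the five sources above amounts to a union minus the known failures; a sketch with placeholder song IDs:

```python
# Hypothetical sketch of building a test list from the five sources listed
# above. All song IDs are placeholders, not real titles.

current = {"s1", "s2", "s3"}          # 1. everything currently playing
near_misses = {"s4", "s5"}            # 2. past "close enough" songs, rested
competitor_plays = {"s2", "s6"}       # 3. shared-audience competitors' songs
outside_exposure = {"s7"}             # 4. in-format songs exposed via TV/film
wildcards = {"s8", "s9"}              # 5. anything else you can find

known_stiffs = {"s9"}                 # repeatedly tested dead; stop testing

test_list = (current | near_misses | competitor_plays
             | outside_exposure | wildcards) - known_stiffs
print(sorted(test_list))  # ['s1', 's2', 's3', 's4', 's5', 's6', 's7', 's8']
```

The sets overlap (a competitor may play something you already play), which is part of why the pool of genuinely testable songs is smaller than it first seems.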


  #39
Old March 28th 07, 03:17 PM posted to rec.radio.shortwave


"Brenda Ann" wrote in message
.. .

"SWLforever" wrote in message
news:GgoOh.3767$J21.3336@trndny03...
D Peter Maus wrote:

Ok. Let's look at that.

How deep do you go on the list? You get one hundred songs rated, how
deep do you go? How do you tier them?

How do you rotate them. With what do you mix them? How old do you go?
What's your current/gold ratio? How many categories of Recurrent do you
have? Do you play the top 10 straight up, or do you break it up into top
10 and top 5 like WLS did in the '70s, with different rotational periods
for each category?


Who decides what songs are on the survey list? That decision would
ultimately have more impact on the songs that get played than the
influence of the station PD's.


Which is precisely my point. The "listeners choose the playlist"... from a
prepared playlist. It all goes back to the stations telling the listeners
what they will like. That they let them choose a few from that limited
selection only reduces everyone else's choices even further.


Of course, that is not true. The lists are often so big that respondents
have to come back several nights in a row to hear them all... I've done
1,500-song tests for one station which, over the last few years, has played
"what if" with more than 4,000 songs in an Adult Hits format.

The reason most tests are far smaller is that, for example, if you test a
library cut ("oldie") several times and it is absolutely stiff, you don't
test it any more. So it is, in fact, hard to come up with a list of "what
if" or far-off-center songs for every test, because after a while you
already know which songs are stone cold dead and the field becomes more and
more limited.


  #40   Report Post  
Old March 28th 07, 03:35 PM posted to rec.radio.shortwave


"D Peter Maus" wrote in message
...
David Eduardo wrote:

Most of the debate has to do with politics and the status quo.
Politicians, who would have to initiate a change, do not want one, as they
are concerned about redistricting and changes in Federal funds. The debate
has very little to do with accuracy and a lot to do with ensuring
reelection.


On BOTH sides.


Exactly! And it has very little to do with statistics, margins of error and
sample frames, none of which the average politician is likely to understand.

Yeah, the ability to poll did not really exist when that part of the
Constitution was written... and a census was simpler with a population
that had limited mobility and lower population densities.


Not exactly the point I was trying to make, no.


Yes, but that is the reason we have the obligation to do a census... it was
the only thing available.

You can't do a head count more accurately by statistical sampling than
you can by counting heads. One has a margin of error, one does not. And
that's the point. Whether or not the ability to manipulate numbers was
advanced enough at the time of the Constitution is not the point. The
point is, you can't get more accurate than a direct count.


But there is no way to do an accurate census in the US today. It's a
six-month process with follow-up. In that time, a huge percentage of
Americans move, people become homeless, people become expats and live
abroad (which, by the way, is an area filled with error... nobody really
knows how many Americans live abroad), and so on.

A poll can project based on small samples, done quickly, and be far more
accurate than a census.

Now, whether or not the count is actually taking place...that would be
a good discussion left for a time when the beer flows freely and neither
of us is sober enough to do any damage.


I can imagine that. Probably more fun than this discussion, too. ;-)

And you do understand that the Census is not without considerable error.
Our society is just too complex to count without embedding a chip in
everyone (just kidding, of course).

The programming is the mixing of the songs. The frequency of play is in
proportion to popularity. There really is no other way. The music itself
is picked by the listeners. The way it is blended together is the
programming function.


Yes, I believe I just said that. Or am I in a different room.


But that does not change an implementation based purely on test score, as
texturizing an hour does not change the songs, just their position in the
hour next to other songs for a better blend.


In the sense that listeners are involved, yes, your point is valid.
But the statement is incomplete.


I don't think so. As long as play is in proportion to popularity (which
is the entire purpose of a test... to tell how much each song is wanted),
it is totally responsive to the listeners' picks. The programmer decides
how the songs should flow together...


Exactly my point.

But doing that is a question of moving songs by a few positions in an hour,
not changing the rotation. Rotations change not a whit from massaging each
hour a bit for the best flow from song to song. All that does is flip the
position of a few songs, not discard them.

The only possible area of "incompleteness" would be sample size. But
testing has shown that doubling or tripling the sample has no effect on the
results. Going any further would be beyond the economics of radio, so it is
not really incomplete but, rather, impossible.
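The "doubling or tripling has no effect" claim follows from the standard margin-of-error formula for a sampled proportion, which shrinks only with the square root of the sample size: doubling the sample cuts the error by a factor of about 1.4, not 2. A small illustration (the sample sizes here are hypothetical, not actual survey figures):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sampled proportion (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Doubling or tripling the sample barely moves the margin of error.
base = margin_of_error(500)
for factor in (1, 2, 3):
    m = margin_of_error(500 * factor)
    print(f"{factor}x sample: {100 * m:.2f}% (ratio {m / base:.3f})")
```

The printed ratios fall as 1/sqrt(factor), which is why spending double the research budget buys only a modest gain in precision.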



Wow. You're amazing. You've debated every point that wasn't at issue,
here. Are you SURE you're not Michael Bryant?


I don't think so...

To review....the point I was trying to make, which apparently got lost
in a lot more things than I had intended to say....


Your original statement was that you don't program the music, the
listeners program the music.

My rebuttal, which need not be repeated here in its detail for the
fifth time, is that your listeners DON'T program the music, but rather
a sample of your listeners has influence on the songs you play. The
Programming of the Music is still based on decisions of PDs and
Consultants.


But it isn't. The music plays in exact proportion to how much it is liked.
There are no changes made there...


Or for those in Rio Linda... a group of your listeners pick songs; YOU
PROGRAM the music based on them.


No, we schedule the music in strict adherence to the amount they like the
songs. Programming is the glue that sticks them together.
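"Scheduling in strict adherence to how much they like the songs" can be shown with a toy proportional-rotation calculation. Everything here is invented for the example: the song titles, the test scores, and the weekly spin budget.

```python
# Hypothetical test scores (higher = better liked) and a weekly spin budget.
scores = {"Song A": 92, "Song B": 88, "Song C": 61}
spins_per_week = 210

# Each song's spins are proportional to its share of the total score, so
# airplay tracks popularity rather than a programmer's taste.
total = sum(scores.values())
rotation = {song: round(spins_per_week * s / total) for song, s in scores.items()}
print(rotation)  # {'Song A': 80, 'Song B': 77, 'Song C': 53}
```

On this view the "programming" decision is only where each spin lands in the hour, not how many spins a song gets.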


Damn, David... I love you like a brother, but ****....sometimes,
you're such a Consultant.


Thanks for the first part... and the second part is not an insult. There are
many good consultants...

