7pm Project Update – Nov 10, 2020

Featured

It’s been a month since I processed the May 10th recording and put together the post. Why slow down? What else has been going on?

Well, it’s actually been a pretty busy few weeks for me and the family, and the neighborhood, and the city and the country and the world. In response to all of this activity, I decided to sit back a while and try to process my thoughts instead of audio files.

News Break:
On November 7th, once the election returns had provided enough data for media outlets to announce the winner of the presidential election, clapping broke out at random moments and again at 7pm. At a random moment that day, someone would open their window and cheer and clap. Sometimes it was greeted with a crowing chorus; other times it was simply a lone voice. I think this mode of expression is going to be with us for a while.

First order of business on the project is simply to document one of its new goals. 7pm Somewhere is what I am calling the platform evolution of the project. In short, what I have already been doing is:

  1. Taking my recordings of 7pm from April 29th to July 9th, all made in one spot on the globe
  2. Creating a unique soundscape by processing each recording with a unique processing method (I use something different for each day – and the processing should be thought of as the application of a number of algorithms to the original data set)
  3. Posting the resulting soundscape to this site and appending additional information about that day to help offer some context.

7pm Somewhere would take the very static structure I have:

Day’s recording at my location + Unique Processing Algorithm for that Day’s recording (and thus my location) + Some contextual info about My Location on that day

Let’s make this a little more math-ish

R = Recording (1 per day at fixed location at 7pm)
D = Day (20200429 – 20200709)
L = Location (fixed)
P = Processing (Unique Per Day, may have more than one per day)
C = Context (Unique per day; news and data from Location and other information about the day's recording process and insights)

P_D(R_{D,L}) + C_{D,L} = One Perspective on One Location on One Day
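
To make this concrete, here is a minimal sketch in Python of what that formula might look like as a data model – the class and field names are my own illustration, not anything that exists on the site:

    from dataclasses import dataclass

    @dataclass
    class Recording:          # R: one recording per day, at a fixed location, at 7pm
        day: str              # D, e.g. "20200505"
        location: str         # L, e.g. "NYC"
        audio_path: str

    @dataclass
    class Processing:         # P: the unique processing used for a given day/location
        day: str
        location: str
        name: str             # e.g. "Process D"

    @dataclass
    class Context:            # C: news, weather, and notes for that day/location
        day: str
        location: str
        notes: str

    def perspective(p: Processing, r: Recording, c: Context) -> dict:
        """P_D(R_{D,L}) + C_{D,L} = one perspective on one location on one day."""
        return {"soundscape": (p.name, r.audio_path), "context": c.notes}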

But this is a global phenomenon – I thought, WWJBD? Well, the realization is that It's always 7pm Somewhere (with a nod to Jimmy Buffett and Alan Jackson – let the conjuring begin). What we all want is to get together and party – folks from all over the world – we all have a common experience – all of us! If there is a prime opportunity for people to come together, I can't think of a better one than this.

The goal of 7pm Somewhere is to provide a platform for people all around the world to bring:

  • Their 7pm recordings from their location, on any day the celebration occurred
  • Their processing algorithms (or make them available as a service)
  • Contextual info from their location on that given day

The 7pm Somewhere platform would allow folks to mix and match recordings and processing to create new soundscapes. This would begin the creation of a graph connecting locations (recording by location and day, processing by location and day, context by location and day). That is, someone could upload a recording from Barcelona (41.405825 North, 2.165260 East) from July 12th, select my processing algorithm from May 5th, and create a new soundscape that connects our two locations and those two dates, plus the date the processing algorithm was originally used. That person could also upload contextual information regarding their date and location. This would allow for the remixing and connecting of experiences. The platform would provide a centralized location for listening to, creating, sharing, and studying the 7pm phenomenon.
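
As a rough sketch of that graph idea – using the Barcelona example above, with made-up node names just to show the shape of it:

    import networkx as nx

    G = nx.MultiDiGraph()

    # Nodes are (kind, location, day) tuples.
    barcelona_recording = ("recording", "Barcelona 41.405825N 2.165260E", "20200712")
    nyc_processing = ("processing", "NYC", "20200505")

    # A new soundscape connects the two locations and the dates involved.
    remix = ("soundscape", "Barcelona+NYC", "20200712+20200505")
    G.add_edge(barcelona_recording, remix, relation="source recording")
    G.add_edge(nyc_processing, remix, relation="processing algorithm")

    print(list(G.edges(data=True)))

Every upload and every remix adds more nodes and edges, which is what would slowly knit the locations and days together.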

Please let me know if you think this is interesting and would like to participate. I do not subscribe to the "build it and they will come" approach. Just leave a comment. Now it's time for me to get back to the tasks at hand. New soundscapes should be out soon.

Sunday May 10th @ 7pm

Today's recording could be called "What the Machines Heard" when you look at the process. From the raw recording I had two tracks of production.


Process and Tech
– Process One: The raw recording is sent through RX8 Music Rebalance, which takes an audio stream, tries to identify and isolate Vocals, Bass, Percussion, and Other, and then outputs audio files for each of the components. This is where we can talk about what the machines hear – and also consider the question of bias that is implicit in AI/machine learning algorithms. That is, the creators of the algo assume that music is made up of voice, bass, drums and other… When that tool is applied to a situation that is not like traditional Western popular music, very strange results come out.
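
For anyone who wants to poke at this idea without RX8, an open-source analogue is Spleeter's 4-stem model, which bakes in the very same vocals/drums/bass/other assumption. A minimal sketch, assuming Spleeter is installed and the filename is your own:

    from spleeter.separator import Separator

    # The 4-stem model assumes music = vocals + drums + bass + other,
    # exactly the bias discussed above.
    separator = Separator('spleeter:4stems')

    # Writes vocals.wav, drums.wav, bass.wav and other.wav into the output folder.
    separator.separate_to_file('raw_7pm_recording.wav', 'stems/')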

– Process Two: Using Ableton Live 10, I loaded the component audio files as well as the raw recording, then ran the Convert to MIDI process – with the outcome being 6 different MIDI files representing the Harmony, Melody and Drums from the raw track. For each of the component files I applied the relevant audio-to-MIDI process – so Percussion -> Convert Drums to MIDI, Voice -> Convert Melody to MIDI, Other -> Convert Harmony to MIDI; finally, Bass was processed as melody.

– With all that, I applied the MIDI tracks to relevant synths and then added the raw audio in as a bed. All signals were then sent to the Master bus where Movement was applied with a 50/50 Wet/Dry mix to give it a bit of a groove. This was all assembled and rendered in Reason 11.

I think the results show that there are many, many avenues to follow. This took a long time to pull off because new opportunities kept presenting themselves. But in a nod to getting the work done and putting it out there, I have not spent time really tweaking the synth sounds. So this – and any other recordings done in this process – offers a very interesting place to experiment with fine-tuning the synths.

I also like the synthy “voices” which to me feel like what the aliens would be hearing as the way we speak/sing. And you can hear some of the real voices creeping through the mix.

The raw waveform is at the bottom – all others are MIDI derivatives.
The color coding maps the tracks to the synths used.


CONTEXT
All NYC Data on this day from WolframAlpha:
https://www.wolframalpha.com/input/?i=all+data+new+york+city+May+10+2020

The News of the day from one of the local New York City papers
https://www.nytimes.com/search?dropmab=false&endDate=20200510&query=05%2F10%2F2020&sort=best&startDate=20200510

https://nypost.com/cover/sunday-may-10-2020/

Saturday May 9th @ 7pm

This recording was very different from the one yesterday. The levels were very balanced, though a bit low – it was also a much gentler evening. Completely different attitude from the day before. This change gave me the impetus to add a new processing method to the mix. Method G is here (I am not sure where Method F is… I'm sure it will turn up… oh yeah, that was the attempt to use Reason's Bounce to MIDI function).

Project Theory

What makes Method G different from all those before it is that it involves analyzing the audio file and then having software create MIDI scores for Drums, Melody and Harmony. Now, feeding in a lot of random sound, with no set key, meter, or tempo – well, that puts a lot of trust in the ALGORITHM (algo for short) to figure it out. As you might expect, it's pretty glitchy, but it is also very valuable.
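
To give a sense of what that trust in the algo amounts to, here is a rough sketch of just the melody half of an audio-to-MIDI step, using open-source tools (librosa for pitch tracking, pretty_midi for writing the score). This is an approximation of the idea, not the actual plugin chain described below:

    import numpy as np
    import librosa
    import pretty_midi

    y, sr = librosa.load('raw_7pm_recording.wav', sr=None, mono=True)

    # Frame-by-frame fundamental-frequency estimate; unvoiced frames come back as NaN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz('C2'), fmax=librosa.note_to_hz('C6'), sr=sr)

    hop = 512  # librosa's default hop length for pyin
    times = librosa.times_like(f0, sr=sr, hop_length=hop)

    midi = pretty_midi.PrettyMIDI()
    inst = pretty_midi.Instrument(program=0)  # piano, just as a placeholder voice

    # Turn every voiced frame into a short note: crude and glitchy, which is the point.
    for t, hz, voiced in zip(times, f0, voiced_flag):
        if voiced and not np.isnan(hz):
            pitch = int(round(librosa.hz_to_midi(hz)))
            inst.notes.append(pretty_midi.Note(velocity=80, pitch=pitch,
                                               start=t, end=t + hop / sr))

    midi.instruments.append(inst)
    midi.write('melody_guess.mid')

Feed a street full of random cheering into something like this and you can see why the results are glitchy – and why the "score" says as much about the algorithm's assumptions as it does about the sound itself.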

Think of it this way – you are an alien and you land on this planet and you whip out your tricorder, maybe your… TRICHORDER – it samples the environment and does its best to bring the phenomena of this planet into a representation that makes sense to your alien cognition. Maybe it makes up for sensing capabilities you don't have – like X-ray or fMRI.

Now think about yourself, encountering a new situation. You don't have a first-hand understanding of the situation, but you want to really understand and get involved. So, you put everything at your disposal to work – you translate text – you do image recognition – you analyze the language that might be spoken (is that a language, or just the sound of living?).

At the end of the day, with all the technology but no person-to-person interaction, you are likely to have a very blurry and confused understanding of the situation. Whatever decisions you make to benefit the folks in this situation will be based on biased and inaccurate data. You could very quickly cause a disaster. On the other hand, your sensors could pick up something like: there are people with congenital hip dysplasia who could be helped with a well understood operation. You can see what they cannot.

How do we bring these different views and interpretations of the same situation together? My hypothesis is that you have to just keep trying new "lenses", adjusting the "parameters" and making observations in a very diverse set of conditions… and then what??? How can you share that? How can you take that interpretation, its methodology and data, and aggregate it with the interpretation, methodology and data that someone else has?

This is the Knowledge Building issue of our time. Solving this will have the deepest impact on humanity's ability to communicate and forge mutual understanding.

The Technical and Music Stuff: Method G:

OK – so when I listened to this recording and was struck by the more mellow attitude, and there seemed to be some improvement in the overall recording, it was time to try a different approach. Originally I was thinking about something that would be a sort of shimmering fog over everything. So, I tried out a bunch of things rather blindly; after about 3 hours, I decided to do something I rarely do – go back and read some notes that I had taken. In there was mention of another Output plugin I had not tried, but which was a constant presence in my Instagram feed. Signal is what Output calls a Pulse Engine… so why not try that out – there is a demo version, which I used today.

The first thing I realized was that this was not an audio effect, like Movement and the others – this was a MIDI instrument. So… I needed to somehow get a MIDI representation of the audio file (WAV, 48,000 Hz, 24-bit). Reason has a "Bounce to MIDI" feature, but this seemed to only give me everything on a single note. This could not be the state of the art! Well, I did some research, came upon a piece of software from WidiSoft, and gave their demo a whirl. Once I understood the role and usage of the various algos included, I found that I could generate Drum, Melody and Harmony parts. But I had the demo version – so I needed to investigate further (mostly I wanted to make sure I didn't already have what I needed – could Scaler2 or Decoda work? They might, but I got a quick, easy answer with Ableton Live 10, which has the functionality built in – no fine tuning that I could see, but it worked).

So now I was ready – I created 3 MIDI channels and put an instance of Signal on each. Then I piped the Drum MIDI to one, Melody to the second and Harmony to #3. I also had the raw audio going to the Master, as were the MIDI tracks. It was really very interesting and fun to interact with while finding the settings I ultimately chose. I then just let it play. After V1 was done I wanted to try putting Movement on the Master channel and having everything go through it. I left the settings pretty dry so as not to overwhelm. I think the result is pretty good and there is a whole new universe of sounds and techniques open to me. It was a good day.

All NYC Data on this day from WolframAlpha:
https://www.wolframalpha.com/input/?i=all+data+new+york+city+May+9+2020

Weather on this day from WeatherUnderground

https://www.wunderground.com/history/daily/us/ny/new-york-city/KLGA/date/2020-5-9

The News of the day from one of the local New York City papers
https://www.nytimes.com/search?dropmab=false&endDate=20200507&query=05%2F07%2F2020&sort=best&startDate=20200509

Friday May 8th @ 7pm

Time has come to just put out what I have for May 8th. While not satisfied at this point, my intention remains to treat this day's recording with great care because it is one of the first where a new behavior emerged. This new behavior – someone in the community actually singing and playing guitar – really challenged the processing approach I had been using. I wanted to share some of the reality of the day – the raindrops in the very beginning of the take, my friend and neighbor Daniel Levy's impromptu performance of The Beatles' Here Comes the Sun. At the same time, a technical glitch caused the gain on the microphone to be set too low for the first couple of minutes – and then, with the correction, the volume really jumps. Also, the community at large was really raucous that day. You can hear them all jump in at 7pm – Daniel had started a bit earlier.

Juggling all these issues, I lost sight of the process I had originally intended, which was to move quickly with a first pass through all the recordings and then go back to ones that were particularly interesting and refine them. Instead, I got sucked into a loop of trial and error with no specific goal. Furthermore, I fell into the trap of downloading a whole crop of new plugins, searching for a silver bullet for both my dynamics problem and the overall tone of the piece. Rather than keeping it simple and forging ahead, I wasted about 12 days in this vortex – lesson learned.

But on the positive side – this was an exceptional day. Daniel's performance in the drizzle, coupled with his spot-on song selection, along with the raucous and enthusiastic participation of the crowd – it was a day that really had an emotional impact. I hope this early version is done well enough for some of that to come through.

Technical:
Expect that this day will eventually be revisited when my skills and insights have been sharpened. Below is the unprocessed waveform of the file 2020_05_08-2020_05_08-AT825-T-2-_+5-RX 7 Spectral De-noise. The change in dynamics is pretty evident.

The waveform of the unprocessed recording.

How the raw files are initially processed:
This makes me think I should give a bit more insight into the recording and processing process. The recordings were done live (um… that's obvious, yeah?) using the microphone in the filename – in this case the AT825 which, incidentally, I had borrowed from Daniel Levy a few years ago. Then, during the month of July, I began reviewing the raw files. This was just after my remote recording rig failed – more on that later. One thing that was clear: all the raw recordings had a lot of wind noise. To clean up the recordings, I was fortunate enough to have a demo version of iZotope's RX7 (now RX8). In the filename you will find "RX 7 Spectral De-noise" because that was the processing I used. Each recording was individually analyzed and the results were outstanding. It took a couple of days to do, but it was worth it. The resulting files were now ready for action.
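
If you want to try something similar without RX, the open-source noisereduce package does a rough spectral-gating version of the same idea. A small sketch, with made-up filenames, assuming the first couple of seconds of the file are mostly wind and can serve as the noise profile:

    import librosa
    import soundfile as sf
    import noisereduce as nr

    audio, rate = librosa.load('2020_05_08_raw.wav', sr=None, mono=True)

    # Use the first two seconds (mostly wind, no voices) as the noise profile.
    noise_clip = audio[: int(rate * 2)]

    cleaned = nr.reduce_noise(y=audio, sr=rate, y_noise=noise_clip)
    sf.write('2020_05_08_denoised.wav', cleaned, rate)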

May 8 processing Process E

As mentioned ad nauseam, this was a tricky one. The initial work was done to try to boost the gain at the beginning and bring it down when it was very high. In the end, I put a limiter in there and managed the gain level from there, live, while recording the track. At this point I was still using Process D, which I had used on the 7th, but it seemed all wrong. I stripped everything back and went silver-bullet shopping. A few very frustrating days ensued. Finally I decided to step back to what I had felt was my initial success – using Output's Movement plugin. Why not see what else they had to offer? So, I decided to try out Thermal and Portal, and there was plenty to work with there. I had 5 effects loops total; the other two were using the stock Reason 11 effects. During the recording of the track, I was managing the 5 effect sends and the limiter's input gain. So, this is the first track where I am doing some sort of "performing" during the recording of the processed track.
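
For the curious, here is a very rough offline approximation of that gain management in Python: boost the quiet opening, then let a soft limiter squash anything that now pokes over the top. The filenames, the 2-minute cutoff and the +12 dB figure are placeholders, not the actual settings I used:

    import numpy as np
    import librosa
    import soundfile as sf

    audio, rate = librosa.load('2020_05_08_denoised.wav', sr=None, mono=True)

    # Boost the opening minutes where the mic gain was set too low.
    boosted = audio.copy()
    cutoff = int(rate * 120)          # first two minutes (placeholder)
    boosted[:cutoff] *= 4.0           # roughly +12 dB (placeholder)

    # Soft limiter: tanh gently squashes peaks instead of hard-clipping them.
    limited = np.tanh(boosted)

    sf.write('2020_05_08_leveled.wav', limited, rate)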

Weather from Weather Underground – https://www.wunderground.com/history/daily/us/ny/new-york-city/KLGA/date/2020-5-8

NYC Weather for May 8th 2020 - very cool with a light rain at 7pm.

All NYC Data on this day from WolframAlpha:
https://www.wolframalpha.com/input/?i=all+data+new+york+city+May+8+2020

It was the 50th anniversary of The Beatles' Let It Be album – see below.

From the NY Times that day: https://www.nytimes.com/search?dropmab=false&endDate=20200508&query=05%2F08%2F2020&sections=Health%7Cnyt%3A%2F%2Fsection%2F9f943015-a899-5505-8730-6d30ed861520&sort=best&startDate=20200508

Context Capture Sketching
As mentioned in the project introduction, this project is related to another one – currently called Context Capture. As I move through the 7pm project specifically, I keep running into a feeling/frustration/pain I have had my whole life: the unbelievable difficulty of making statements or expressing ideas in a manner that effectively captures the universe of experiences and influences that shaped that idea or statement. That is, if someone doesn't understand all (or at least a good chunk) of what underlies and informs what I say/play/draw/write/share, they can't really understand it – they are only getting a superficial understanding.

As I write this, it is clear that communication amongst humans, in the age of global information with free distribution, is making it harder and harder to be understood at a time when being able to understand, or at least get a sense of, someone's life experience has become more and more important – imperative, really. For me there are those who have blazed this trail – many years ago – and they are largely forgotten by the current digital elites and the people deciding what ideas have "value" and are worth "investment". I look back, always, to Vannevar Bush and his Memex, "Ted" Nelson and his Xanadu, Claude Shannon and Information Theory, and Alan Kay and his DynaBook as the ones who have plotted the course. We need a new generation to heed the call and work on a new mode or medium of communication that is much richer and simply conveys one's context along with the immediate message. This is also critical for the true formulation of knowledge, both individual and collaborative. More on this later. But here is a sketch of part of the knowledge graph that makes up this post – I only spent 2 minutes on it – it's missing a lot.

Now with HTTPS for your peace of mind

Nothing like realizing that you really should be running your site with HTTPS and then spending hours digging through the gnarl to get it set up. And it costs ya money. But even with the frustrations (mostly self-induced) I was impressed at how much easier it is to deal with than back in the olden times.

Surf safely

Thursday, May 7th @ 7pm

Project Theory and Development:
Today – in real time, that is, September 23, 2020 – there was a piece on NPR about a new movie, The Secrets We Keep, starring Noomi Rapace. In the interview, she gives a brief plot synopsis that mentions that the plot revolves around her character hearing a specific whistle – one that she instantly recognizes from her past. This is exactly the sort of thing I was thinking about when structuring this project – the calls of the community members – that you could know them without seeing them – that the calls are personal and unique.

Technical Stuff:
On this day, I decided to switch microphones from the AT2035 large condenser to the AT825, a stereo mic more suited to field recordings. The results were much better – better rejection of the in-house sound and better pickup of the community. Also, the crowd was much more active.

This is what my setup in Reason looked like for this recording and the day prior.

All NYC Data on this day from WolframAlpha:
https://www.wolframalpha.com/input/?i=all+data+new+york+city+May+7+2020

Weather on this day from WeatherUnderground

6:51 PM – 66 °F, dew point 20 °F, 17% humidity, wind W at 16 mph (gusts 25 mph), 29.69 in, 0.0 in precipitation, Mostly Cloudy
7:51 PM – 64 °F, dew point 21 °F, 19% humidity, wind SW at 14 mph (gusts 23 mph), 29.71 in, 0.0 in precipitation, Partly Cloudy
https://www.wunderground.com/history/daily/us/ny/new-york-city/KLGA/date/2020-9-23

The News of the day from one of the local New York City papers
https://www.nytimes.com/search?dropmab=false&endDate=20200507&query=05%2F07%2F2020&sort=best&startDate=20200507

This piece was in the NYTimes that day

Wednesday, May 6th @ 7pm

Today’s recording uses a different process – Process D – the raw signal is both compressed and expanded before any other processing, in an attempt to get the loudness of the inside sounds more in balance with the outside sounds of the rest of the community. This is going to be an ongoing experiment. After that, I split the signal into two effect loops, one using Output’s Movement, which has figured heavily in the earlier pieces; the other loop is mostly delays and flanging that come with Reason.
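
As a peek into that processing "black box", here is a crude offline stand-in for the compress/expand stage – it measures a short-term loudness envelope and then flattens it, pulling the loud in-house sounds down and the quieter street sounds up. The filenames and the 0.5 exponent are just illustrative, not my actual settings:

    import numpy as np
    import librosa
    import soundfile as sf

    audio, rate = librosa.load('raw_7pm_recording.wav', sr=None, mono=True)

    # Short-term loudness envelope (RMS over ~50 ms windows).
    frame = int(rate * 0.05)
    hop = frame // 2
    rms = librosa.feature.rms(y=audio, frame_length=frame, hop_length=hop)[0]
    envelope = np.interp(np.arange(len(audio)), np.arange(len(rms)) * hop, rms)

    # Raising the envelope to a power < 1 narrows the loud/quiet gap.
    target = np.power(envelope + 1e-8, 0.5)
    gain = np.minimum(target / (envelope + 1e-8), 10.0)  # cap the boost on near-silence
    balanced = np.clip(audio * gain, -1.0, 1.0)

    sf.write('balanced_7pm.wav', balanced, rate)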

Special call out to Lucas White who gave me some great feedback today. This included telling me to give a bit of a peek into the sound processing “black box”, which I am trying here today. His input was super helpful. Thanks!

Tuesday, May 5th @ 7pm

This one was recorded with way, way too much noise coming from inside the house, so I took a stab at compressing and EQing to make it less annoying – the pinging and the hooting are just too much. There is a little pitch shifting and auto-tuning in there. Actually, if I recall correctly now, I forgot to put the mic outside the window, as opposed to just near the window opening. So, yeah, not great listening, but some interesting little moments in there.

Monday, May 4th @ 7pm

This one was created using a lot of the same plugins, but with a different way of organizing them (internally known as the C-Series setup). Also, during the recording – as you can hear in a lot of the previous recordings – I am playing some simple little synth bits: the fox-hunt call, a marshy polysynth. These were played by me at the time – I had a speaker pointed out the window. I figured, why not play an electric/electronic instrument to make my "cheer"? The bongo, tambourine and some of the shouting came from our house as well. Happily, in this recording the in-house sounds are not sooo much louder than the rest of the community. I will be working to address that as the processing gets more sophisticated.