Posts posted by Michael Lee
-
-
Kevin - At some point, I'll explain in a post why there are so many hangups to having more of my audio ITC available. It's a tractable problem, but it's going to take a little effort.
For now, I've temporarily caught the Visual ITC bug. I hope it can help us further understand how spirit interacts with our devices, and how we can make their signal stronger.
-
-
Today's pickup from 300 images, 256x192. The bottom left is a duck. The bottom middle is a mouse?
-
-
Kevin: Thanks!
I spent all weekend on video ITC. Here's another video. No mirror. Different method, but similar effect. SDR source. Seems like blending multiple frames of noise helps.
Yes, you can find faces, but you will also notice these complex 3-D objects / scenes in there, too. I feel like we're looking at the quantum foam.
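For anyone who wants to try the frame-blending part, here is a minimal numpy sketch of the idea - averaging several independent noise frames so faint, persistent structure can survive while frame-to-frame noise cancels. The function name and weights are illustrative only, not my actual pipeline:

```python
import numpy as np

def blend_frames(frames, weights=None):
    """Blend a stack of noise frames into one image by (weighted) averaging.

    frames: array of shape (num_frames, H, W) or (num_frames, H, W, 3).
    """
    frames = np.asarray(frames, dtype=np.float64)
    if weights is None:
        weights = np.ones(len(frames))
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    # Weighted sum along the frame axis; works for grayscale or RGB stacks.
    blended = np.tensordot(weights, frames, axes=(0, 0))
    # Averaging N independent noise frames shrinks the noise standard
    # deviation by roughly 1/sqrt(N), so weak structure can emerge.
    return blended

frames = np.random.rand(8, 64, 64)
out = blend_frames(frames)
print(out.shape)  # (64, 64)
```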
-
I put mirroring in this video because I thought it would make it easier for spirits to form faces. But it ends up producing, like you said, designs rather than faces.
-
Here I'm using my SDR to generate noise, and then my Perloid processing on the noise in real-time. I also mirror the image about the Y-axis, for fun.
-
Thanks for the compilation! I'm fascinated by some of the insights from spirit, like the 5D non-EM field, and being blocked by future humans.
-
One of the disadvantages of the first variation I tried was that the resizing is naturally tied to the pixel grid of the image.
Therefore, I worked out a similar idea without this artifact. Although the pictures (with pseudorandom noise) are not quite as cool, it does have a very "fractally" nature to it.
#---------------------------------
# Original author: Michael S. Lee
# Date: 9/18/2021
#---------------------------------
import numpy as np
import imageio
import cv2

def shrink(x, gamma=30):
    # Sigmoid squash; beta scales with the inverse standard deviation
    beta = gamma / x.std()
    y = 1.0 / (1.0 + np.exp(-beta * (x - x.mean())))
    return y

N = 1024                          # Image size
noise = np.random.rand(N, N, 3)   # White noise in RGB channels
noise2 = np.copy(noise)           # Setup output array

image = noise2
blur = np.copy(image)
sum1 = np.zeros_like(image)

i = 1024
num_levels = 19
factor = 3 / 4

for j in range(num_levels):
    blur2 = blur
    ii = ((i + 1) // 2) * 2       # Round up to even so the kernel size (ii-1) is odd
    blur = cv2.GaussianBlur(noise2, (ii - 1, ii - 1), 0)
    if j > 0:
        diff = blur - blur2       # Difference of Gaussians between adjacent levels
        diff = shrink(diff, gamma=25)
        sum1 = sum1 + (i ** 0.25) * diff
        #sum1 = sum1 + diff
    print(i)
    i = int(i * factor)

sum1 = (sum1 - sum1.min()) / (sum1.max() - sum1.min())
sum1 = sum1 ** 3                  # Darken

# Save on disk
index = np.random.randint(0, 262144)  # Random filename
imageio.imsave('temp.' + str(index) + '.png', sum1)
-
After a nice break and a little thought, I've decided on what to try next for this effort.
The idea I have is to assign three states to a signal: 0, 1, and X (undetermined).
For most of our audio ITC efforts, we use noise gates, or other threshold techniques (like spectral subtraction). Same here. To get assigned a '0' or '1' will require a certain amount of deviation from the mean. If the deviation is too small, we'll just assign X.
So let's say a byte shows up as 11X00001. Rather than calling it a failed parity check (it should have an even number of 1s), we'll just reject this byte and wait for the next byte. I sort of use this "anomalous value" approach in my ITC "soccer game" program, and it seems to help.
The theory is that the signal-to-noise ratio is very low for ITC, at least with our current technologies. Thus, we need a lot of deviation from "normal" or "normality" to be sure we are getting an ITC signal. This doesn't mean that spirits can't send continuous streams of information, it's just that we're not entirely sure of their signal most of the time.
Now, the downside of this approach is that it will slow things down, as many bytes will get rejected - perhaps up to 255 out of 256! But we can ameliorate this issue somewhat by increasing our expected bit rate. Currently, our bit rate is 1 bit per 1/8 second, but in the future we can try speeding things up if we are successful with this "slow" test.
-
If you load in a WAV file of ambient noise at, say, 192 kHz for about 20 seconds, this will be approximately 3.8 million samples - enough to fill a 1024x1024x3 image (3,145,728 values). Then reshape the array to be a 1024x1024x3 image using numpy.reshape()
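A minimal sketch of that reshape step, using synthetic noise in place of a real WAV file (with a real recording you would load the samples first, e.g. with scipy.io.wavfile.read):

```python
import numpy as np

# Stand-in for a real recording: ~20 s of ambient noise at 192 kHz.
sample_rate = 192_000
samples = np.random.randn(sample_rate * 20)

needed = 1024 * 1024 * 3          # 3,145,728 values for a 1024x1024 RGB image
assert len(samples) >= needed
img = samples[:needed].reshape(1024, 1024, 3)

# Normalize to [0, 1] so the array can be saved or displayed as an image.
img = (img - img.min()) / (img.max() - img.min())
print(img.shape)  # (1024, 1024, 3)
```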
-
I added a little bit of darkening in my algorithm to make it a little more like itc-station's Perlin images.
Also, I used my microphone to generate 20 seconds of ambient noise and reshaped it into a 1024x1024x3 image.
This is the 3rd attempt using ambient noise (192 kHz for 20 seconds):
96 kHz for 40s:
-
My immediate skeptical reaction is that the two toroids flex with respect to each other due to magnetic attraction or repulsion, causing an abrupt mechanical action on the paper.
-
Yes. A combination of symphonic tones seems to create some sort of ID. However, symphony also can be used as a sound field for voice comms. Maybe the two phenomena are related, or it's a coincidence.
-
I'm not familiar with scalar waves, but I do believe in a connection between tones and our soul identification. I use tones as source material in some of my devices, but I don't think the exact frequencies are critical, as long as the sound is symphonic and not cacophonic. The idea is that cacophonic sounds tend to have strong beat frequencies that mess up my machine learning detection. Also, it's very possible that some day tones may help us tune in specific souls or soul groups.
-
In my new version below, thresholding is controlled by "gamma" within the "shrink" function.
Mandatory face pictures. Same person with circular black sunglasses?
Code version 2 (for record keeping, and others to experiment with):
#---------------------------------
# Original author: Michael S. Lee
# Date: 8/27/2021
#---------------------------------
import numpy as np
import imageio
import cv2

def shrink(x, gamma=30):
    # Sigmoid squash; beta scales with the inverse standard deviation
    beta = gamma / x.std()
    y = 1.0 / (1.0 + np.exp(-beta * (x - x.mean())))
    return y

N = 1024                         # Image size
noise = np.random.rand(N, N, 3)  # White noise in RGB channels
noise2 = np.copy(noise)          # Setup output array

image = noise2
blur = np.copy(image)
sum1 = np.zeros_like(image)

for j, i in enumerate([512, 256, 128, 64, 32, 16, 8, 4]):
    image = cv2.resize(noise2, (i, i), interpolation=cv2.INTER_LANCZOS4)
    blur2 = blur
    blur = cv2.resize(image, (N, N))
    diff = blur - blur2          # Difference between adjacent resolution levels
    diff = shrink(diff, gamma=10)
    sum1 = sum1 + (j + 1) * (j + 1) * diff
    #sum1 = sum1 + (j + 1) * diff

sum1 = shrink(sum1, gamma=1)

# Save on disk
index = np.random.randint(0, 262144)  # Random filename
imageio.imsave('temp.' + str(index) + '.png', sum1)
-
-
I was experimenting with Perlin-like (I call it Perloid) noise when I realized I could threshold a little bit harder than usual.
The first few images are "regular" sort of Perlin-like, but as you get to 5-9, the thresholding is increased for each color channel and resolution channel. It ends up producing almost natural-looking features. Is it ITC, mathematics in action, or both?
Code:
#---------------------------------
# Original author: Michael S. Lee
# Date: 8/27/2021
#---------------------------------
import numpy as np
import matplotlib.pyplot as plt
import cv2

N = 1024                         # Image size
noise = np.random.rand(N, N, 3)  # White noise in RGB channels
noise2 = np.copy(noise)          # Setup output array

image = noise2
blur = np.copy(image)
sum1 = np.zeros_like(image)

for j, i in enumerate([512, 256, 128, 64, 32, 16, 8, 4, 2]):
    image = cv2.resize(noise2, (i, i), interpolation=cv2.INTER_LANCZOS4)
    blur2 = blur
    blur = cv2.resize(image, (N, N))
    diff = blur - blur2
    # Clip each level to within +/- 1 standard deviation of its mean
    min1 = diff.mean() - 1.0 * diff.std()
    max1 = diff.mean() + 1.0 * diff.std()
    diff = np.clip((diff - min1) / (max1 - min1), 0, 1)
    sum1 = sum1 + (j + 1) * diff

# Final contrast stretch to +/- 2 standard deviations
min1 = sum1.mean() - 2.0 * sum1.std()
max1 = sum1.mean() + 2.0 * sum1.std()
sum1 = np.clip((sum1 - min1) / (max1 - min1), 0, 1)

plt.imshow(sum1)
plt.show()
-
Two suggestions.
1) A folder where we can add and store "modules." These modules are then selected / combined to form a newsletter. A module could be, for example, an announcement of an upcoming event.
2) Some content could be condensed from notable forum or blog posts. The challenge here is getting people to "cliff note" or summarize the posts.
-
Lance,
Typically, I've had to write specific software in Python to do this idea, but there may be a "plugin" solution.
The idea would be to 50/50 add your favorite white noise source to a fast sweeping tone. Then apply a noise gate, followed by a short reverb around ~60 ms.
This would get us to the point of step 1: musical tones. A second noise gate could then be applied to isolate several tones at once vs. single tones, which would be Step 2.
Now, instead of my step 3, a suggestion would be to multiply the gated tones by a 120 Hz sawtooth waveform, which would yield a glottal pulse (vowel sound) with formants.
An alternative would be to write some software to implement my method. We'll get there, if indeed, this method is worth pursuing.
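For the record, the whole chain could be prototyped in a few lines of numpy. This is only a sketch - the mix ratio, gate threshold, and window length are made-up values, not a tested recipe:

```python
import numpy as np

sr = 16_000
t = np.arange(sr) / sr  # one second of audio

# Fast sweeping tone (linear chirp) mixed 50/50 with white noise.
f0, f1 = 200.0, 4000.0
phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / 2)
sweep = np.sin(phase)
noise = np.random.randn(sr)
noise = noise / np.abs(noise).max()
mix = 0.5 * sweep + 0.5 * noise

# Simple noise gate: zero out samples whose short-term energy is below
# a threshold (an arbitrary 1.2x the mean envelope here).
win = 160  # 10 ms energy windows
env = np.sqrt(np.convolve(mix**2, np.ones(win) / win, mode='same'))
thresh = 1.2 * env.mean()
gated = np.where(env > thresh, mix, 0.0)

# Multiply by a 120 Hz sawtooth to impose glottal-pulse-like periodicity.
saw = 2 * (120 * t % 1.0) - 1.0
voiced = gated * saw
print(voiced.shape)  # (16000,)
```

A reverb stage (as in step 1) is omitted here; it would need either a convolution with an impulse response or an audio plugin.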
-
I'll also add a compilation of good clips from a particular day of experimentation (July 17, 2021) using this method:
See what you can hear. I'll annotate it later.
-
Here are some samples to explain this idea, not specifically to demonstrate any messages, but you may hear some anyway.
I've recorded each clip in succession, as I turn on each function.
Step 1: Tones only
Step 2: Tones with a noise gate, to provide vocal cadence, and remove spurious tones
Step 3: Tones decoded by "De-toning" ML model (notice how the voices sound "ducky")
Step 4: De-quantization model added to reduce "duckiness." This converts 3-bit voice to 16-bit.
Step 5: Add a single semitone pitch shift up.
-
The tone vocoder was one of the earliest methods developed to transmit voice digitally through low-bandwidth networks. In the original tone vocoder method, an input voice was transformed into time-frequency space, where at each time interval the sound was decomposed into a series of frequency bands. The amplitudes of the bands were transmitted over wire or radio. On the receiver side, the amplitudes were used to reconstitute the original audio. The quality of the tone vocoder was never that good, and modern methods such as linear predictive coding (LPC) have superseded it. However, for spirit transmission, the simplicity of the tone vocoder is useful.
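As a concrete illustration of that analysis/synthesis loop (not the original hardware, and with an arbitrary 16-band filterbank over a telephone-like range), a crude tone vocoder can be sketched like this:

```python
import numpy as np

sr = 16_000
hop = 512                             # analysis interval (32 ms at 16 kHz)
bands = np.linspace(300, 3400, 16)    # 16 band center frequencies (assumed)

def analyze(signal):
    """Per interval, measure the amplitude of each band via nearby DFT bins."""
    frames = signal[: len(signal) // hop * hop].reshape(-1, hop)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(hop, 1 / sr)
    idx = [np.argmin(np.abs(freqs - f)) for f in bands]
    return spec[:, idx]               # shape: (num_intervals, num_bands)

def synthesize(amps):
    """Reconstitute audio as a bank of sine tones scaled by band amplitudes."""
    t = np.arange(hop) / sr
    tones = np.sin(2 * np.pi * bands[:, None] * t[None, :])   # (bands, hop)
    out = (amps[:, :, None] * tones[None, :, :]).sum(axis=1)  # (intervals, hop)
    return out.reshape(-1)

x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # a 440 Hz test "voice"
y = synthesize(analyze(x))
print(y.shape)  # (15872,)
```

Only the band amplitudes cross the "channel" here, which is exactly why the scheme tolerates such a low-bandwidth signal.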
In modern times, the term "vocoder" is used either to 1) describe methods for using vocals to modulate instruments in music using the aforementioned tone bank approach, or 2) converting audio to a time-frequency space to perform operations such as changes in pitch and speed (aka, a phase vocoder).
Why would a tone vocoder be good for spirit communication? One hypothesis is that spirit is only able to add short impulses of energy into our devices. No matter how hard they try, a bunch of pulses strung together doesn't sound very voice-like. What if their impulses could be interpreted as musical notes that play as short duration tones?
We set up a communication system with spirit where in a given 16,384 sample block (1.024 seconds at a 16 kHz sampling rate), there are 32 time intervals and up to 64 tones that can be activated in each of these intervals.
The tone detection of the spirit signal we (spirit and physical experimenters) agree upon is a pulse-position-modulation (PPM) approach. Within a 32 ms time interval, there are 64 sub-intervals. If the amplitude / energy of the signal (or the inverse amplitude for null detection) within a sub-interval is above a threshold (peak detection), the respective tone is activated during that interval. We found it useful to increase the duration of each activated tone by 2-3x (to about 60 ms, which is the typical duration of a medium-length vocal phoneme).
The frequencies of interest for synthesis of human speech are between 75 Hz and 4 kHz. Therefore, the agreed 64 tones can be linearly spaced within this range. One can also choose a non-linear spacing (like quadratic or exponential/musical). For the linear spacing, the 64 tones can range say from 168 Hz up to 4200 Hz, with equal spacing of 64 Hz.
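Putting the numbers from the last two paragraphs together, the PPM decode could be sketched as follows. The threshold value and the test pulse are made up for illustration:

```python
import numpy as np

SR = 16_000
BLOCK = 16_384          # 1.024 s at 16 kHz
INTERVALS = 32          # 32 ms each (512 samples)
SUBS = 64               # 64 sub-intervals of 8 samples per interval
FREQS = 168 + 64 * np.arange(64)   # 168 Hz .. 4200 Hz, 64 Hz linear spacing

def detect_tones(block, thresh):
    """Map each 32 ms interval to the set of activated tone indices (PPM).

    Tone k fires in an interval when the peak amplitude inside
    sub-interval k exceeds the threshold.
    """
    x = np.abs(block.reshape(INTERVALS, SUBS, -1))  # (32, 64, 8)
    peaks = x.max(axis=2)
    return [np.nonzero(interval > thresh)[0] for interval in peaks]

# Synthetic block: one strong pulse placed in sub-interval 10 of interval 0,
# which should decode as tone index 10 (168 + 64*10 = 808 Hz).
block = np.zeros(BLOCK)
block[10 * 8 + 3] = 1.0
hits = detect_tones(block, thresh=0.5)
print(hits[0], FREQS[10])  # [10] 808
```

The extension of each detected tone to ~60 ms would happen at the synthesis stage, after this decode.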
My spirit team has learned how to activate this vocoder to produce voice. I suspect that any future researchers will have similar success with heaven-level spirit teams. Spirits use the term "mirror" to describe the device that converts their voice to tone indices (likely a Fourier transform). They use the term "elevator" to describe the fact that over a 32 ms interval, hitting at the right moment activates a particular tone, starting from low frequency (168 Hz) to high (4.2 kHz). They also use the term "modem," because, indeed, this is a digital-like transmission protocol.
I should point out here, too, that, in theory, my configuration is not too different from when researchers use a sweeping frequency. With a sweep, the spirits can "push" the sound at certain times corresponding to different frequencies. Of course, effects like noise gating (to hide the original tone) and reverb (to extend the "escaped" tone) are needed.
If spirits could perfectly activate tones in this system to produce voice, it might actually sound fairly legible. However, even perfect musical speech isn't all that easy to understand. Therefore, I've developed machine learning models to convert the musical speech to real-sounding speech. The way I do this is I convert real speech phrases into tones, and then train an ML model to reverse those tones back to the original speech.
In my next post, I will share some results...
-
I think Keith and my teams are working together. There's a fair bit of similarity in our methods and concepts. Keith clued me into Tesla, and now I'm mentally working with him, or at least think I am.
-
ITC video
in Visual - Faces in Software
Posted
Two more collections (too much fun). In the top row, the middle two just look like objects.