Everything posted by Michael Lee

  1. Thanks for the compilation! I'm fascinated by some of the insights from spirit, like the 5D non-EM field, and being blocked by future humans.
  2. One of the disadvantages of the first variation I tried was that the resizing is naturally tied to the pixel grid of the image. Therefore, I figured out a similar idea without this artifact. Although the pictures (with pseudorandom noise) are not quite as cool, it does have a very "fractally" nature to it.

     #---------------------------------
     # Original author: Michael S. Lee
     # Date: 9/18/2021
     #---------------------------------
     import numpy as np
     import imageio
     import cv2

     def shrink(x, gamma=30):
         # logistic squash centered on the mean, scaled by the std. dev.
         beta = gamma / x.std()
         return 1.0 / (1.0 + np.exp(-beta * (x - x.mean())))

     N = 1024                            # image size
     noise2 = np.random.rand(N, N, 3)    # white noise in RGB channels

     blur = np.copy(noise2)
     sum1 = np.zeros_like(noise2)

     i = 1024
     num_levels = 19
     factor = 3 / 4
     for j in range(num_levels):
         blur2 = blur
         ii = ((i + 1) // 2) * 2         # round up to even; ii-1 gives an odd kernel size
         blur = cv2.GaussianBlur(noise2, (ii - 1, ii - 1), 0)
         if j > 0:
             diff = blur - blur2
             diff = shrink(diff, gamma=25)
             sum1 = sum1 + (i ** 0.25) * diff
         print(i)                        # debug: current scale
         i = int(i * factor)

     sum1 = (sum1 - sum1.min()) / (sum1.max() - sum1.min())
     sum1 = sum1 ** 3                    # darken

     # Save on disk under a random filename
     index = np.random.randint(0, 262144)
     imageio.imsave('temp.' + str(index) + '.png', sum1)
  3. After a nice break and a little thought, I've decided on what to try next for this effort. The idea I have is to assign three states to a signal: 0, 1, and X (undetermined). For most of our audio ITC efforts, we use noise gates or other threshold techniques (like spectral subtraction). Same here. To get assigned a '0' or '1' will require a certain amount of deviation from the mean. If the deviation is too small, we'll just assign X. So if a byte shows up as 11X00001, rather than call it a failed parity check (it should have an even # of 1s), we'll just reject this byte and wait for the next one. I sort of use this "anomalous value" approach in my ITC "soccer game" program, and it seems to help. The theory is that the signal-to-noise ratio is very low for ITC, at least with our current technologies. Thus, we need a lot of deviation from "normal" to be sure we are getting an ITC signal. This doesn't mean that spirits can't send continuous streams of information; it's just that we're not entirely sure of their signal most of the time. Now, the downside of this approach is that it will slow things down, as many bytes will get rejected - perhaps up to 255 out of 256! But we can ameliorate this issue somewhat by increasing our expected bit rate. Currently our bit rate is 1 bit per 1/8 second, but in the future we can try speeding things up if we are successful with this "slow" test.
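The three-state scheme above can be sketched in a few lines of Python. This is a hypothetical illustration, not the author's code; the deviation threshold `k` and the exact byte-rejection rule are my assumptions based on the post.

```python
import numpy as np

def assign_trits(values, k=2.0):
    """Assign '1' or '0' only when a sample deviates from the mean by
    at least k standard deviations; otherwise mark it 'X' (undetermined).
    k=2.0 is an assumed threshold, not a value from the post."""
    mu, sigma = values.mean(), values.std()
    out = []
    for v in values:
        if v > mu + k * sigma:
            out.append('1')
        elif v < mu - k * sigma:
            out.append('0')
        else:
            out.append('X')
    return out

def accept_byte(trits):
    # Reject any byte containing an X; otherwise require even parity
    # (an even number of 1s), as described in the post.
    if 'X' in trits:
        return False
    return trits.count('1') % 2 == 0
```

With this rule, a byte like 11X00001 is simply discarded rather than counted as a parity failure.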
  4. If you load in a WAV file of ambient noise at say 192 KHz for about 20 seconds, this will be approximately 3 million samples. Then reshape the array to be a 1024x1024x3 image using numpy.reshape()
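As a minimal sketch of the reshape step (with the WAV loading replaced by synthetic noise, since the file and reader library aren't specified in the post):

```python
import numpy as np

# Stand-in for samples read from a WAV file (a reader such as
# soundfile.read() would be used in practice; synthesized here).
samples = np.random.rand(192_000 * 20)   # ~3.84 million mono samples

needed = 1024 * 1024 * 3                 # 3,145,728 values for a 1024x1024 RGB image
image = samples[:needed].reshape(1024, 1024, 3)
```

At 192 kHz, 20 seconds gives slightly more than the 1024 x 1024 x 3 values needed, so the array is truncated before reshaping.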
  5. I added a little bit of darkening in my algorithm to make it a little more like itc-station's Perlin images. Also, I used my microphone to generate 20 seconds of ambient noise and reshaped it into a 1024x1024x3 image. This is the 3rd attempt using ambient noise (192 kHz for 20 seconds): 96 kHz for 40s:
  6. In my new version below, thresholding is controlled by "gamma" within the "shrink" function. Mandatory face pictures. Same person with circular black sunglasses? Code version 2 (for record keeping, and for others to experiment with):

     #---------------------------------
     # Original author: Michael S. Lee
     # Date: 8/27/2021
     #---------------------------------
     import numpy as np
     import imageio
     import cv2

     def shrink(x, gamma=30):
         # logistic squash centered on the mean, scaled by the std. dev.
         beta = gamma / x.std()
         return 1.0 / (1.0 + np.exp(-beta * (x - x.mean())))

     N = 1024                           # image size
     noise2 = np.random.rand(N, N, 3)   # white noise in RGB channels

     blur = np.copy(noise2)
     sum1 = np.zeros_like(noise2)

     for j, i in enumerate([512, 256, 128, 64, 32, 16, 8, 4]):
         # downsample to i x i, then blow back up: detail at scale i
         image = cv2.resize(noise2, (i, i), interpolation=cv2.INTER_LANCZOS4)
         blur2 = blur
         blur = cv2.resize(image, (N, N))
         diff = blur - blur2
         diff = shrink(diff, gamma=10)
         sum1 = sum1 + (j + 1) * (j + 1) * diff

     sum1 = shrink(sum1, gamma=1)

     # Save on disk under a random filename
     index = np.random.randint(0, 262144)
     imageio.imsave('temp.' + str(index) + '.png', sum1)
  7. I was experimenting with Perlin-like (I call it "Perloid") noise when I realized I could threshold a little harder than usual. The first few images are "regular," sort of Perlin-like, but as you get to 5-9, the thresholding is increased for each color channel and resolution level. It ends up producing almost natural-looking features. Is it ITC, mathematics in action, or both? Code:

     #---------------------------------
     # Original author: Michael S. Lee
     # Date: 8/27/2021
     #---------------------------------
     import numpy as np
     import matplotlib.pyplot as plt
     import cv2

     N = 1024                           # image size
     noise2 = np.random.rand(N, N, 3)   # white noise in RGB channels

     blur = np.copy(noise2)
     sum1 = np.zeros_like(noise2)

     for j, i in enumerate([512, 256, 128, 64, 32, 16, 8, 4, 2]):
         # downsample to i x i, then blow back up: detail at scale i
         image = cv2.resize(noise2, (i, i), interpolation=cv2.INTER_LANCZOS4)
         blur2 = blur
         blur = cv2.resize(image, (N, N))
         diff = blur - blur2
         # clip to +/- 1 standard deviation and rescale to [0, 1]
         min1 = diff.mean() - 1.0 * diff.std()
         max1 = diff.mean() + 1.0 * diff.std()
         diff = np.clip((diff - min1) / (max1 - min1), 0, 1)
         sum1 = sum1 + (j + 1) * diff

     # clip the accumulated image to +/- 2 standard deviations
     min1 = sum1.mean() - 2.0 * sum1.std()
     max1 = sum1.mean() + 2.0 * sum1.std()
     sum1 = np.clip((sum1 - min1) / (max1 - min1), 0, 1)

     plt.imshow(sum1)
     plt.show()
  8. Lance, typically I've had to write specific software in Python to do this idea, but there may be a "plugin" solution. The idea would be to mix your favorite white noise source 50/50 with a fast-sweeping tone. Then, apply a noise gate, followed by a short reverb of around ~60 ms. This would get us to the point of step 1: musical tones. A second noise gate could then be applied to isolate several tones at once vs. single tones, which would be step 2. Then, instead of my step 3, a suggestion would be to multiply the gated tones by a 120 Hz sawtooth waveform, which would yield a glottal pulse (vowel sound) with formants. An alternative would be to write some software to implement my method. We'll get there, if indeed this method is worth pursuing.
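The sawtooth-multiplication step can be sketched in a few lines of numpy. The sample rate and the 1 kHz stand-in tone are illustrative assumptions; only the 120 Hz sawtooth comes from the post.

```python
import numpy as np

sr = 16_000                            # assumed sample rate
t = np.arange(sr) / sr                 # one second of time stamps
tone = np.sin(2 * np.pi * 1000 * t)    # stand-in for a gated formant tone
saw = (t * 120.0) % 1.0                # 120 Hz rising sawtooth in [0, 1)
voiced = tone * saw                    # amplitude now pulses 120 times/sec
```

Multiplying by the sawtooth imposes a periodic amplitude envelope on the tone, roughly mimicking a glottal pulse train under the formant shape.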
  9. I'll also add a compilation of good clips from a particular day of experimentation (July 17, 2021) using this method: clips_from_07_17_2021.wav See what you can hear. I'll annotate it later.
  10. Here are some samples to explain this idea, not specifically to demonstrate any messages, but you may hear some anyway. I've recorded each clip in succession, as I turn on each function.
      Step 1: Tones only: software_vocoder_tones.wav
      Step 2: Tones with a noise gate, to provide vocal cadence and remove spurious tones: sv_tones_noise_gate.wav
      Step 3: Tones decoded by "De-toning" ML model (notice how the voices sound "ducky"): sv_detone_only.wav
      Step 4: De-quantization model added to reduce "duckiness." This converts 3-bit voice to 16-bit: sv_detone_and_dequant.wav
      Step 5: Add a single semitone pitch shift up: sv_detone_dequant_shift1.wav
  11. The tone vocoder was one of the earliest methods developed to transmit voice digitally through low-bandwidth networks. In the original tone vocoder, an input voice was transformed into time-frequency space, where at each time interval the sound was decomposed into a series of frequency bands. The amplitudes of the bands were transmitted over wire or radio, and on the receiver side the amplitudes were used to reconstitute the original audio. The quality of the tone vocoder was never that good, and modern methods such as linear predictive coding (LPC) have superseded it. However, for spirit transmission, the simplicity of the tone vocoder is useful. In modern times, the term "vocoder" is used either 1) to describe methods for using vocals to modulate instruments in music via the aforementioned tone-bank approach, or 2) to describe converting audio to a time-frequency space to perform operations such as changes in pitch and speed (aka a phase vocoder).

      Why would a tone vocoder be good for spirit communication? One hypothesis is that spirit is only able to add short impulses of energy into our devices. No matter how hard they try, a bunch of pulses strung together doesn't sound very voice-like. What if their impulses could instead be interpreted as musical notes that play as short-duration tones?

      We set up a communication system with spirit where, in a given 16,384-sample block (1.024 seconds at a 16 kHz sampling rate), there are 32 time intervals and up to 64 tones that can be activated in each of those intervals. The tone detection scheme we (spirit and physical experimenters) agree upon is a pulse-position-modulation (PPM) approach. Within a 32 ms time interval, there are 64 sub-intervals. If the amplitude / energy of the signal (or the inverse amplitude, for null detection) within a sub-interval is above a threshold (peak detection), the respective tone is activated during that interval.
We found it useful to increase the duration of each activated tone by 2-3x (to about 60 ms, the typical duration of a medium-length vocal phoneme). The frequencies of interest for synthesis of human speech lie between 75 Hz and 4 kHz, so the agreed 64 tones can be linearly spaced within this range. One can also choose a non-linear spacing (quadratic, or exponential/musical). With linear spacing, the 64 tones can range from, say, 168 Hz up to 4200 Hz, with equal spacing of 64 Hz.

My spirit team has learned how to activate this vocoder to produce voice. I suspect that any future researchers will have similar success with heaven-level spirit teams. Spirits use the term "mirror" to describe the device that converts their voice to tone indices (likely a Fourier transform). They use the term "elevator" to describe the fact that, over a 32 ms interval, hitting at the right moment activates a particular tone, from low frequency (168 Hz) up to high (4.2 kHz). They also use the term "modem," because, indeed, this is a digital-like transmission protocol. I should point out, too, that in theory my configuration is not too different from when researchers use a sweeping frequency. With a sweep, the spirits can "push" the sound at certain times corresponding to different frequencies. Of course, effects like noise gating (to hide the original tone) and reverb (to extend the "escaped" tone) are needed.

If spirits could perfectly activate tones in this system to produce voice, it might actually sound fairly legible. However, even perfect musical speech isn't all that easy to understand. Therefore, I've developed machine learning models to convert the musical speech to real-sounding speech: I convert real speech phrases into tones, and then train an ML model to reverse those tones back into the original speech. In my next post, I will share some results...
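The PPM decoding rule described above (32 intervals per block, 64 sub-intervals each, mapped to 64 linearly spaced tones) might be sketched like this. This is my own illustration of the stated protocol, not the author's code; the threshold value, null detection, and the actual tone synthesis are omitted.

```python
import numpy as np

SR = 16_000                 # sampling rate (Hz)
BLOCK = 16_384              # 1.024 s per block
INTERVALS = 32              # 512 samples (32 ms) per interval
SUBS = 64                   # 8 samples (0.5 ms) per sub-interval
FREQS = 168 + 64 * np.arange(64)   # 168 Hz .. 4200 Hz, 64 Hz spacing

def active_tones(block, threshold):
    """Pulse-position demodulation: within each 32 ms interval, any
    sub-interval whose energy exceeds the threshold activates the tone
    whose index equals that sub-interval's position."""
    x = block.reshape(INTERVALS, SUBS, -1)      # (32, 64, 8) samples
    energy = (x ** 2).sum(axis=2)               # energy per sub-interval
    tones = []
    for k in range(INTERVALS):
        idx = np.nonzero(energy[k] > threshold)[0]
        tones.append(FREQS[idx])                # active frequencies
    return tones
```

For example, a pulse landing in sub-interval 10 of the first interval would activate the tone at 168 + 10*64 = 808 Hz.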
  12. I think Keith's team and mine are working together. There's a fair bit of similarity in our methods and concepts. Keith clued me into Tesla, and now I'm mentally working with him, or at least I think I am.
  13. Here are a few more results, to show you the range of what can happen. These are all my unique designs, pretty much extensions of the 2020 paper to 3, 4, and 8 "nodes." The 47 gates come from the fact that the aggregate delay is around 20 ns, which is roughly the period of a 50 MHz clock cycle. It should also be pointed out that randomness at 50 MHz may not be required for ITC. It may be sufficient that the systems are "chaotic," or that they switch back and forth randomly between nearly periodic states.
  14. Here are results for two noise sources. The first has pretty solid randomness, but it is biased. https://res.mdpi.com/d_attachment/symmetry/symmetry-12-00506/article_deploy/symmetry-12-00506-v2.pdf The second has much less bias, but some correlation between bits at the highest frequencies. (https://www.hindawi.com/journals/ijrc/2009/501672/) Most of the noise sources I've coded up sound and look like white noise by the time you reach the audible range, so we are really honing in on non-randomness at the 50 MHz sampling rate. The idea is that we need the higher sampling rate to get more information from spirit.
  15. I will use this thread to share with everyone the raw output of my different random noise / bit sources from the FPGA. A lot of these designs are already in the scientific literature; only a few are truly my own creations, built on the ideas of previous ones. Each one has a rate of 50 Mbits per second, because this is the base clock speed of the device. The system clock can be sped up to about 200 MHz without overheating, but for now 50 MHz should be more than enough. Also, you can run many of the sources in parallel. For example, you could have 20 sources running on one 50 MHz FPGA for a total of 1 Gbit per second.

      As you'll see, none of my sources are perfectly random at 50 MHz, although if you set the bar lower, to say a 1 Mbit output, many should pass randomness tests. I believe that one or more of these sources, and perhaps new variants yet undiscovered, can hold an ITC signal. But that won't be the direct purpose of this thread.

      One of the most basic things people look for in random bit streams is called "bias." This is usually measured as the number of 1s divided by the total number of bits; 0.5, or 50%, is the desired theoretical value. But I think this is overemphasized in the literature, because if bias = 0.5 isn't met, people will revert to "whitening" techniques to make the bit stream look more random. Now, if your goal is to generate cryptographic keys to store your cryptocurrency, whitening may be fine. But if you're trying to "hear" spirits, whitening might end up ruining the weak signal we're trying to pick up. Thus I want to introduce a new metric I just thought up (special thanks to spirit team): bias variance. If the bias is 0.75 for a particular noise source, that should be fine, as long as it doesn't keep changing over time. We can always subtract the mean if we're doing things like summing up the bits over an interval (remember the 6,250,000 samples per group bit?)
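One way to compute the proposed "bias variance" metric is to measure the bias over successive windows and take the variance of those per-window values. This windowed formulation is my assumption, since the post doesn't pin down an exact definition; the window size is arbitrary.

```python
import numpy as np

def bias(bits):
    # fraction of 1s; 0.5 is the theoretical ideal
    return bits.mean()

def bias_variance(bits, window=1_000_000):
    """Assumed formulation: compute the bias over successive windows of
    fast bits and return the variance of those per-window biases. A
    stable bias (even one offset from 0.5) gives a value near zero."""
    n = len(bits) // window
    per_window = bits[:n * window].reshape(n, window).mean(axis=1)
    return per_window.var()
```

Under this metric, a source stuck at bias 0.75 scores well (near-zero bias variance), while a source whose bias drifts over time scores poorly, matching the post's intent.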
  16. Andres, right! I'm asking them to do some simple math. No language encoding necessary.
  17. So, I would say my experiment to allow spirits to preserve byte parity failed. I still think it is a good test, and it's an experiment I will arguably return to as I think of different noise sources to program on the FPGA. I agree it's possible that spirits have trouble synchronizing with our time systems. I also think it's possible they can't see the "1"s and "0"s going through my chip. It's also possible they just aren't interested in this particular experiment. Another possible problem is the algorithm I'm using to extract the bits: finding the mean, and then assigning bits by being above or below that mean. Perhaps a bit should only be assigned when it is several standard deviations away from the mean; otherwise, it would be assumed that no signal was sent.

      A positive outcome from this experiment is that I feel I now have a convenient, objective metric to evaluate noise sources: when they produce 8 bits, they should be able to ensure "parity," aka an even number of 1s. It's a relatively simple agreement between experimenters Here and There. I'm not asking for a particular language or response, simply a consistency check.

      Edit #1: I added an example of what the FPGA sounds like, and looks like as a spectrogram, when it's making the bits. There's a low tone for a 0, a high tone for a 1, and silence for a rejected byte. It has nothing to do with ITC, but I had to make this so I could communicate the FPGA's group bits to my PC. It's surprisingly difficult to get an FPGA to talk to a PC. I have a much better idea (logic analyzer) if I need it in the future, but this was a fast hack. example_sound.mp3
  18. They can hear the tones being generated. High frequency for 1, low frequency for 0, and silence if the symbol doesn't pass the parity test. I have it running, now. Unmonitored. But recording. They've been playing with my FPGA noise sources for months, and getting decent results with voice. So I imagine they know what's going on. Still, the limits of physics may prevent them from having sufficient control of the noise sources.
  19. OK, for viewers tuned in at home, here's the first result: for all symbols collected, ~55% had correct parity. 50% is expected by chance. So it's something, but it's also bad enough that I'm not even ready to read the symbols. Maybe I'll be impressed with 70%? I have several noise sources to try. Maybe one of them will make the cut? Stay tuned.

      Edit 1: Some other noise sources I've tried are at 47% correct parity; 55% is the max I've seen so far. I think this variation could easily be chalked up to the algorithm I'm using to extract the bits, specifically how quickly I'm updating the mean, which is the dividing line between assigning a 0 or a 1. Therefore, I would like to see at least 60% or greater to know we're going somewhere.
  20. Parity serves two purposes: reject symbols that have a bit error and measure the quality of the ITC signal. It's not meant to match any standards. Yes, it is possible to have parallel bit streams. The sky is the limit. However, for now I'll be happy with telegraph speeds from the 1800s.
  21. I'm currently trying digital ITC. I move around quickly from idea to idea, like a road runner ("meep meep!"). The general idea is to receive 8 bits per second, with one parity bit. My field programmable gate array (FPGA) noise source yields 50 million random bits per second. Let's call the slow bits "group bits": 8 group bits per second, and 8 group bits per "symbol." I decide on each group bit by summing 6,250,000 random bits. If the sum is above the moving average, I assign a '1'; otherwise I assign a '0'. Parity means that the number of 1s per second/symbol should be even. If it isn't, then that's a bad symbol. The goal would be to have no bad symbols. Symbols can be translated via the 7-bit ASCII character set, or we could reduce to a 5-bit, letters-only set. Either way, the first, very simple question is how well spirits can maintain parity. Random chance says 50% of symbols would be rejected.
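The group-bit and parity scheme described above can be sketched as follows. The fixed `mean` argument is a simplification of the moving average mentioned in the post, and the function names are my own.

```python
import numpy as np

BITS_PER_GROUP = 6_250_000   # 50 Mbit/s gives 8 group bits per second

def group_bits(raw_bits, mean, block=BITS_PER_GROUP):
    """Sum each block of fast bits; a sum above the mean yields a '1',
    otherwise a '0'. (The post uses a moving average; a fixed mean is
    an assumed simplification here.)"""
    n = len(raw_bits) // block
    sums = raw_bits[:n * block].reshape(n, block).sum(axis=1)
    return (sums > mean).astype(int)

def parity_ok(symbol):
    # accept an 8-group-bit symbol only if it has an even number of 1s
    return int(symbol.sum()) % 2 == 0
```

By chance, half of all 8-bit symbols have even parity, which matches the 50% rejection baseline stated in the post.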
  22. Wagner: Very nice. There seem to be at least three phases. One is the "sssss." Another sounds like a periodic tone around 100 Hz. The last is a transitional type of sound. The last two work together to produce vowels and short consonants. Is the apparatus completely stationary? If so, it's interesting that it changes sounds like that.
  23. No. I was starting from the start. This was referring to my phonetic typewriter. A pre-recorded set of scrambled phonemes played weakly through a USB audio interface. The additive noise triggers a software noise gate to allow spirits' selection of phonemes to pass through.
  24. A physical analog of what you're describing is a line of dominos. The first domino is only a little bit stable, but when it falls by a light touch, it starts a cascade of motion / energy release.