
Statistical Significance Research

The most important discovery in our Bible Code research is the mean probability, because this value allows us to determine the statistical significance of our discoveries. In this article, we provide a brief overview of how we found the mean p-values and what we learned along the way.

A Brief Overview of Our Research

Using Smith’s Bible Dictionary, we compiled a list of over 3,000 keywords, each between 3 and 8 characters in length. We then built specialized software to automatically search for each keyword across an ELS skip range of 1 to 1500 and to calculate and record the p-value of each discovery, one book at a time.
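
As an illustration, the search loop described above might look something like the minimal Python sketch below. All names here are hypothetical; the article does not publish its source code or the formula used to compute each p-value, so that step is omitted.

    # A minimal sketch of the search loop (hypothetical names throughout;
    # the p-value computation is not published, so it is omitted here).
    def find_els(text, keyword, max_skip=1500):
        """Yield (start, skip) for every occurrence of `keyword` as an
        equidistant letter sequence in `text`, for skips 1..max_skip."""
        letters = "".join(ch for ch in text.lower() if ch.isalpha())
        k = keyword.lower()
        span = len(k) - 1
        for skip in range(1, max_skip + 1):
            # Only start positions that leave room for the whole word.
            for start in range(max(0, len(letters) - span * skip)):
                if all(letters[start + i * skip] == k[i] for i in range(len(k))):
                    yield start, skip

Each (start, skip) pair identifies one hit; recording a p-value per hit would happen inside this loop.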

This process ran 24 hours a day for 35 days straight, and we gathered over 59,000 samples. We then divided our samples into groups, with as many as six groups per book in most cases; each group is defined by the character length of the samples it contains.

We then calculated the mean (average) value for each group by adding the p-values together and dividing by the group’s total number of samples. We also calculated each group’s significance level by taking the conventional 0.05 and dividing it by the group’s total number of samples.
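
In code, that arithmetic is straightforward. The sketch below assumes each sample is a (book, keyword_length, p_value) record, which is our guess at the data shape; dividing 0.05 by the group’s sample count mirrors the per-group division described above (a Bonferroni-style adjustment).

    from collections import defaultdict

    def group_statistics(samples, alpha=0.05):
        """samples: iterable of (book, keyword_length, p_value) records
        (a hypothetical shape; the real database schema is not published)."""
        groups = defaultdict(list)
        for book, length, p in samples:
            groups[(book, length)].append(p)   # one group per book and word length
        stats = {}
        for key, ps in groups.items():
            n = len(ps)
            stats[key] = {
                "n": n,
                "mean_p": sum(ps) / n,         # the group's mean p-value
                "sig_level": alpha / n,        # 0.05 divided by the group's sample count
            }
        return stats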

We are now able to test for and determine statistical significance in Bible Code. Because we have set our significance level at 5%, there is only a 5% chance of reporting significance when a result is in fact random (a false positive); in other words, we can now show evidence for true statistical significance in Bible Code with 95% confidence.

Automatic Fine-Tuning Design

Our online software is responsible for fine-tuning the entire system. From the viewpoint of the system, this is an automatic process: whenever anyone uses our online software, each result is saved to our database, where it can be accessed and placed into its group.

As we continuously add more samples for testing, our mean values and significance levels become more precise and our tests more accurate. Thanks to the support of thousands of people from all around the world, we have recently added over 2,000 additional samples, and we continue to receive more each day.
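
One way a system like this could keep its group means current as new samples stream in is an incremental (running) mean update, sketched below. Whether the live software actually works this way is our assumption, not something stated above.

    class RunningMean:
        """Incrementally updated mean: lets a group's mean p-value be
        refreshed as each new sample arrives, without re-reading the
        whole database."""
        def __init__(self):
            self.n = 0
            self.mean = 0.0

        def add(self, p_value):
            self.n += 1
            self.mean += (p_value - self.mean) / self.n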

Current Precision and N/A Results

Despite having gathered over 59,000 samples, we do not yet have a sufficient number of samples to satisfy the requirements of our statistical significance test for every group in every book. This means that, in some cases, our significance ratings may not be accurate. As a side effect, you may notice a HIGH to MID rating for a keyword thought to be random.

Moreover, the same could happen in reverse: we may rate a keyword as LOW, but as our precision increases we may find that this keyword is actually MID to HIGH. So, never be too quick to accept or reject any result based on its significance level.

Furthermore, in rare cases, we may not be able to perform a significance test at all, because there were certain books in which we could not locate any samples for certain groups. This was typical of keywords 8 characters in length. Should you ever find yourself in this situation, our software will return a significance rating of N/A to signify the event.
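
Putting the last three paragraphs together, the rating logic might be sketched as follows. Only the N/A rule is documented above; the HIGH/MID/LOW cut-offs here are assumed purely for illustration.

    def rate_significance(p_value, group):
        """Rate one search result against its (book, length) group.
        `group` is a record from group_statistics() above; the HIGH/MID/LOW
        cut-offs are assumed for illustration, only the N/A rule is documented."""
        if group is None or group["n"] == 0:
            return "N/A"                      # no samples exist for this group
        if p_value <= group["sig_level"]:
            return "HIGH"                     # beats the adjusted significance level
        if p_value <= group["mean_p"]:
            return "MID"                      # better than the group's average
        return "LOW"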

A Brief Overview of Our Analysis

When our initial results were compiled, we built a series of graphs based on the total number of samples in each group. When these graphs were compared side by side, a pattern of normal distribution was revealed throughout the entire series.
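
For readers who want to reproduce this kind of side-by-side comparison, a bar chart of sample counts per keyword-length group takes only a few lines. The `stats` dictionary is the hypothetical shape from the group_statistics sketch above, not the project’s actual database.

    import matplotlib.pyplot as plt

    def plot_group_counts(stats, book):
        """Bar chart of sample counts per keyword-length group for one book,
        using the stats dictionary from group_statistics() above."""
        lengths = sorted(l for (b, l) in stats if b == book)
        counts = [stats[(book, l)]["n"] for l in lengths]
        plt.bar(lengths, counts)
        plt.xlabel("keyword length (characters)")
        plt.ylabel("number of samples")
        plt.title("Sample distribution: " + book)
        plt.show()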

At first, these results surprised us; we did not expect to find any similarities in what we would discover, especially a normal distribution pattern. We kept asking, ‘How is it possible for the same list of more than 3,000 words to produce virtually the same frequency pattern in every book we searched?’ After all, we are not talking about verbatim text.

So, to find out what was causing this, we processed our word list against all 39 books of the English TaNaK (JPS) to determine how many words were possible for each group and how many of those words were actually found. It then became clear why this happened.
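
A rough version of that possible-versus-found comparison is sketched below, reusing the hypothetical find_els helper from earlier: for each word length, count how many dictionary words exist and how many were actually located as an ELS.

    from collections import defaultdict

    def possible_vs_found(words, text, max_skip=1500):
        """For each word length, count how many dictionary words exist
        versus how many were actually found as an ELS in `text`
        (reusing the find_els sketch from earlier)."""
        possible = defaultdict(int)
        found = defaultdict(int)
        for w in words:
            possible[len(w)] += 1
            if next(find_els(text, w, max_skip), None) is not None:
                found[len(w)] += 1
        return {l: (possible[l], found[l]) for l in sorted(possible)}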

In order to understand this concept, we must first understand what we mean by our “discoveries”: hidden messages encoded using an equidistant letter sequence (ELS), which does not produce verbatim text.

Therefore, probabilities play a major role in what we discover.
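
As a quick illustration, an ELS simply reads every n-th letter of the text, which Python’s slice notation expresses directly:

    text = "abcdefghijklmnopqrstuvwxyz"
    print(text[0::5])   # prints 'afkpuz': every 5th letter, starting at index 0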

It all goes back to what we have known all along: some words can produce statistical significance in Bible Code and some cannot. However, we could never be sure where statistical significance actually began, until now. By graphing the mean values, we were able to see the exact point where statistical significance universally begins.

Our results showed overwhelming evidence that statistical significance begins with keywords that are 7 characters in length and that significance grows rapidly beyond this point.

DIVINECODERS

See also: DivineCoders Means Research Documented Analysis

  1. Samuel
    June 1, 2014 at 5:55 am

    Please, which Bible version is used in this program?

    • June 1, 2014 at 11:33 am

      Hi Samuel,

      We use the Jewish Publication Society (JPS) bible, English translation.

      DIVINECODERS

  2. George
    September 30, 2013 at 1:52 pm

    Certainly an interesting criterion. What I don’t understand is why the 1500-letter skip limit, and why 5%? Did you do any analysis based on specific letter frequency? For example, some letters appear more often than others in the dictionary; E is 12.51% of all the letters of all the words in the dictionary, while Z is only 0.09%. Is it reasonable to expect words with high E content to appear far more often than words with Z content, without any significance to the specific words involved? In the Bible Code instance, would doing a letter-frequency analysis of the 59,000 samples collected to date help adjust the probability values more accurately?
    Just asking.
    Cheers

    • September 30, 2013 at 5:51 pm

      Hi George,

      Good questions! Thanks for asking.

      By limiting our search to an ELS of 1500, we are essentially forcing ourselves into a very tight and limited block of text where significance is HIGH, because the possibilities are fewer. In addition, we are searching for a hidden message, which suggests that we should find tight clusters of encoded words; therefore we limit the ELS to ensure this happens.

      A 5% significance level is standard in t-testing.

      Yes, we analyzed the letter frequency of both the books of the Old Testament and the terms that we found, which helped us to understand where significance universally begins.

      See: https://divinecoders.wordpress.com/2012/01/16/divinecoders-means-research-documented-analysis/

      No, what will make our tests more accurate is having more samples.

      DIVINECODERS

  3. kakalaka
    July 1, 2013 at 10:53 am

    Nice O_o you guys are awesome! I appreciate everything you’re doing with this Bible code. You’re in effect proving that God has always been our creator and that he has had a plan for each and every one of us all along. Keep up the good work mates!

    • July 1, 2013 at 7:27 pm

      Thanks for reading and for the kind words!

      DIVINECODERS

