The author wanted to make his work of generating ebook texts more efficient. This book describes the system he put together to achieve that efficiency:
1. When generating a single text;
2. When generating subsequent texts;
3. To generate new texts that previously would not have been considered.
Generating a single text involved the following steps:
1. Collecting ideas for the text through online research, direct book research, original thought, and drawing together a series of arguments and a conclusion;
2. Forming these ideas or thoughts into a logical progression;
3. Developing an overall structure for the text: chapter headings, subheadings, and the content under each, usually arranged in no more than three to four levels of headings (1., 1.1, 1.1.1 being three levels; see the sketch after this list);
4. Filling in each section under each heading with the ideas for that section;
5. Writing out in full the ideas under each heading.
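As a minimal illustration of step 3, the following sketch represents such a heading hierarchy as nested pairs of headings and children. The headings and the print_outline helper are invented here purely for illustration; they are not part of the author's actual tools, which are discussed later.

    # A hypothetical three-level outline; "1.", "1.1", "1.1.1" are the
    # heading levels referred to in step 3 above.
    outline = {
        "1. The system": {
            "1.1 Collecting ideas": {
                "1.1.1 Online research": {},
                "1.1.2 Direct book research": {},
            },
            "1.2 Forming a logical progression": {},
        },
    }

    def print_outline(node, depth=0):
        """Print each heading indented by its level."""
        for heading, children in node.items():
            print("  " * depth + heading)
            print_outline(children, depth + 1)

    print_outline(outline)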
Inefficient activity was identified in the following areas:
• Brainstorming, mind mapping, and/or collecting references to locations across the Internet was done in one or two programs, such as Ygnius, which involved typing ideas into a map. This text was not available for electronic copying or for use in the word processor. Thus, even though ideas were recorded, those ideas or thoughts all had to be re-typed at a later date.
• Often, after all the material was written, the progression of ideas did not seem effective. The material was therefore re-organized, often with a great deal of reformatting needed to cater for the changed emphasis of the ideas or thoughts.
• Through the writing process, with changes in format, alterations in the styles used, and chopping and changing of text, a large number of styles were created in Word, most of which could be eliminated. Eliminating them meant re-doing the formatting; leaving them in meant the conversion to .PDF format would stall and cause difficulties.
When generating subsequent texts, whether updates of the first ebook or new ebooks based on it, there was always text that could be re-used. Inefficient activities involved in building these subsequent books included:
• Editing the original book to delete the text no longer required;
• Re-formatting the new book to reflect the new logical progression;
• Eliminating the styles no longer required and obtaining a good clean document for production of the .PDF document.
Generating new texts out of the texts already available often did not occur to the author, simply because there was no way to make connections between the various discrete books that had been produced.
This book describes the system that was built to eliminate these inefficient activities, and how it now enables the author to generate multiple texts, effectively databasing his entire production of text into reusable text objects that can be inserted effectively and efficiently into any text in production. This forms a rapid text development system that can generate new texts of 50 to 100 pages or more in less than three days' work.
The whole premise of the concept of ‘System’ assumes that there will never be a single program the composer can adopt to achieve his or her end goals. Why is it not possible to find a single good program that gets the job done as the composer requires?
There are several good reasons why it will never be possible to build a single program that does everything. First, we must consider the nature of the problem we are solving; the way I think about the activity I am embarking on may be quite different from the way you think about it. Different thinking styles require different program activities. For example, you may prefer a Mind Mapping program that holds all thoughts in the on-screen picture. Personally, I do not really care what other thoughts there may be, as long as I can see the relevant and immediate ones. The programmers of MS Word never envisioned anything remotely like PersonalBrain being part of ‘the writing process’; their style of thinking about how to write is a different concept from the one I am working with and elaborating here. The concept of re-usable text chunks never entered their thinking, as their concept is based on ‘original’ thought as a writing method; to them, an outliner is an appropriate method of textual development. So our different styles of thinking about writing lead to different pre-set elements within a program.
A program is a development team’s best guess at what people will need when they embark on a writing project. The team has studied a wide range of writing contexts and has made the program as general as possible, with as wide a range of tools as possible, for the people who will use it. MS Word, broadly, is built for an office where business texts are generated. Simply put, its makers’ concerns are not my concerns in generating text, and I do not expect them to have exactly my concerns. There is a complete meta-language of electronic meaning-making here, and I do not expect one team to embed the whole language in one package.
Second, due to the nature of programming, it is better that a programming team focuses on one major activity and gets that right, rather than trying to solve all problems in the one program. It is a complex enough job to get one major thing right, let alone to solve every problem of electronic language. For example, databasing text chunks has been a quest of this author for ten years. I have personally devised fifteen different ways of doing it, even to the point of having my own programming team develop a concept for me. It took someone else, who had invested millions of dollars in a paradigm or way of thinking, to actually come up with the solution. Solving problems in electronic language is a huge investment. So, with TheBrain solving this problem, I do not want them to get distracted by, say, trying to develop a book formatting system. It is better that they focus their investment on improving the PersonalBrain program.
Third, for people to solve problems in electronic language, it is better that there are thousands of programs with interchangeable data that empower people like me to solve problems of our own. A program is a set of instructions to which I can add my own instructions to get jobs done. I do not want, nor is it a viable proposition, for a programming team to put every instruction needed for a particular purpose into a program. That would mean someone else doing my thinking for me.
Each composer should have some good pre-built instructions for getting things done. Beyond that, a composer needs the flexibility to do things so that she or he can make meaning in ways that are easy to think about for her or his thinking style, composing style, writing style, and the particular job at hand.
In the regime of writing we have today, there is a political prohibition that causes us to think that the largest textual element we can borrow from writing outside the current text is the word or phrase. We have databases of words; these are called dictionaries. We even have databases of phrases, so that when I start typing a phrase, the computer can suggest a completion to save me from having to type every key.
To think that there are sentences, and even sections of a chapter, that could be re-used is unthinkable, and it is not practiced by most writers or authors. Each sentence and each paragraph, as well as any larger grouping of words, belongs to a particular text; therefore, re-use is not a concept most people would entertain. There are, of course, political reasons why this is the case: the copyright and ownership laws we have in our society preclude us from this type of activity.
In the world of information delivery, though, it makes sense that if someone has invented the perfect explanation of some little idea that is in the public domain, and I wish to invoke that explanation, I should be able to obtain it from a database and include it in my current text. There are conventions of quotation and attribution used in academic circles for handling this, and they are one viable way of working.
However, when it is my own writing and I own the copyright, why can I not database everything I have written, so that if I have already written a paragraph or a chapter section that this current text needs, I can pull it from that text and simply copy it into this one? There is nothing stopping me from doing this except the problems of locating the text and of copying and pasting it into the current format.
Simply put, a good writer can produce text at the rate of ten finished pages a day when ‘think-producing’ text. That is more than one page an hour, or about 600 to 700 words of finished production an hour. There are some questions of efficiency here that must be answered for this text chunk sharing to be viable.
If it takes me longer to find the text than to produce it again, then the task of finding that text and copying it into the current document is not viable. However, if there is an efficient way for me to find that text, so that it takes just a fraction of the time to locate it and add it into the text, then the task of locating it again is worth doing.
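The break-even condition can be put in one line (the time labels here are introduced purely for illustration; they are not terms the system itself uses):

    t_locate + t_assess + t_insert < t_rewrite

Whenever the left-hand side is smaller, re-use pays. On the figures given below, it comes to about 3 minutes against roughly 45 minutes.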
This is where PersonalBrain comes in. If all the text I have ever written is stored in a system where thoughts are linked to each text chunk, and I can search all chunks, search all thoughts, or simply step from one thought through the thinking process that generated the text in the first place, then I can locate my text chunks efficiently. Second, if every text chunk is formatted in the same manner as all my other text chunks, then it is a simple matter of including the file containing that chunk in my current text.
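As a minimal sketch of this idea, the following code models thoughts linked to chunk files, with a search across both thought names and chunk contents. It is illustrative only: the names (Thought, ChunkStore, find_chunks) are invented for this sketch and are not PersonalBrain’s interface, which handles all of this internally.

    from dataclasses import dataclass, field
    from pathlib import Path
    from typing import Optional

    @dataclass
    class Thought:
        name: str                          # e.g. "re-usable text chunks"
        chunk_file: Optional[Path] = None  # the attached text chunk, if any
        links: list = field(default_factory=list)  # linked Thoughts

    class ChunkStore:
        def __init__(self, thoughts):
            self.thoughts = thoughts

        def find_chunks(self, term):
            """Return chunk files whose thought name or text mentions term."""
            term = term.lower()
            hits = []
            for t in self.thoughts:
                in_name = term in t.name.lower()
                in_text = (t.chunk_file is not None
                           and term in t.chunk_file.read_text().lower())
                if in_name or in_text:
                    hits.append(t.chunk_file)
            return [h for h in hits if h is not None]

Because every chunk lives in a similarly formatted file, any hit returned by such a search can be dropped into the current text as-is.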
Using the text chunk idea, and using PersonalBrain to database the chunks, I can search to find whether I have ever written a text chunk about an idea, locate that chunk, read and assess its applicability for the location at hand, and include it in my current text, all in about 3 minutes. On average, my text chunks are about 500 words long, which would take approximately 45 minutes to write in a normal writing day. So, if I have written about the idea before, and that explanation or piece of information delivery is useful in this text, it takes me 3 minutes to insert it into my current document: a saving of at least 40 minutes.
This means that if I have written chapter sections before and am simply calling on those previous sections, I can build a text of 100,000 words in, let us say, 10 hours: a little over a day’s work. Built from the absolute beginning at ten pages a day, the same text would take me 20 days to produce.
Now, the reality is that there is no text I can ever build entirely from previously written paragraphs. There are, however, quite different texts that I can build using 70% pre-written material and 30% brand-new material; I have experienced this on many, many occasions. I have even produced a text that is 80% previously written material, constituted in a different logical arrangement, with 20% new material. Even if just 50% of the current writing assignment is based on pre-written text chunks, it is still a massive saving.
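As a rough check of the arithmetic in the last two paragraphs, the following sketch computes the working days needed at various reuse fractions. The figures (500-word chunks, 45 minutes to write a chunk fresh, 3 minutes to re-use one, an 8-hour working day) are assumptions drawn from the numbers above, and the days_to_build helper is invented for this illustration.

    # Figures assumed from the text: ~500 words per chunk, ~45 minutes
    # to write a chunk from scratch, ~3 minutes to locate and insert an
    # existing chunk, and an 8-hour working day.
    WORDS_PER_CHUNK = 500
    WRITE_MIN = 45   # minutes to write a chunk fresh
    REUSE_MIN = 3    # minutes to locate, assess, and insert a chunk

    def days_to_build(total_words, reuse_fraction, minutes_per_day=8 * 60):
        """Working days to build a text when some fraction is re-used."""
        chunks = total_words / WORDS_PER_CHUNK
        minutes = chunks * (reuse_fraction * REUSE_MIN
                            + (1 - reuse_fraction) * WRITE_MIN)
        return minutes / minutes_per_day

    print(days_to_build(100_000, 1.0))  # all re-used: 1.25 days (10 hours)
    print(days_to_build(100_000, 0.0))  # all new: 18.75 days (about 20)
    print(days_to_build(100_000, 0.5))  # half re-used: 10 days

Even the 50% case cuts the production time of a 100,000-word text roughly in half.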