Spawn of Ten Thousand Websites


Script based site family architecting
for the World Wide Web






William Perilli


Spawn of Ten Thousand Websites


Script based site family architecting for the World Wide Web


Author: William Perilli



Amillia Publishing Company

Copyright 2004, 2005

Natick, Massachusetts



This is currently a company confidential document. Not for distribution or reproduction.

Copyright Amillia Publishing, 2004,2005

by William Perilli


Table of Contents

Spawn of Ten Thousand Websites

Script based site family architecting for the World Wide Web

Amillia Publishing Company In-house Confidential Document.

Copyright 2004, 2005, William Perilli, Amillia Publishing Company.

Project Form Input (PFI)

Review 1 December, 2004

Parsing the hidden variable

Results

Pull Down Lists

Handling the POST and the GET

Previous and Next Buttons

A Little bit of Computer History

Adding Slideshow Behavior

Screen Shot 12_01_2004

Better behavior for gallery style and thumbnail style changes

Theory of thumbnails

Commenting on Pictures

Rating Pictures

Putting pictures into categories

Creating Categories

Cropping Pictures

Rotating Pictures

Masking Pictures

Draw

Layer

Tiling

Storing of Web Created Mosaics

Web Creation Scripting Language

2 December, 2004

Saved Content

Mapperilli

December 6, 2004

Implementing Tables

Adding Cells

Images

Efficiency Discussion

A Style Class: Obvious Things Become Clear

Name Value Pairs

Content

Tables and Content

Process of System Design in the age of Software Packages

Link array

WebArticle

Article Publishing/Authoring Tools

Moving Forward with Design

Purpose of Modules

Site Map Classes

Magic Class

Required Data Classes

Page Navigation Manager

Creating Audio CD's




Amillia Publishing Company In-house Confidential Document.

This document is the private work of William Perilli for Amillia Publishing Company and is classified as a need-to-know document.

Copyright 2004, 2005, William Perilli, Amillia Publishing Company.



Project Form Input (PFI)


In order to have any kind of a useful web page I will need the ability to create form input. Below is the current, very preliminary input form from the gallery web page:



// form inputs

// create a form input.

// need a list of form input items

// for each input item we need to know the data about that item.

// then for each item we need to output the html that will

// provide the form for the user.

// We can also provide ordering and layout information as

// a configurable. We see that this providing

// an extensible, configurable, generic data input form

// generator for html forms is not trivial.

// But it is also not complex.

// set option selected

// provide further options



function provide_form_input()

{

global $selected;// _region;

//global $selected_date;

add_center_start();

echo <<< NO_MORE

<form action="$_SERVER[PHP_SELF]" method="POST">

<P><STRONG>Please Select a Gallery: </STRONG>

<SELECT NAME = "region">

<OPTION SELECTED>

NO_MORE;


print ($selected['region']) ;


echo <<< NO_MORE

<OPTION>minuteman

<OPTION>flowers

<OPTION>flowers2

<OPTION>flowers3

<OPTION>johnweb1

</SELECT></P>

<P><STRONG>Today is: </STRONG>

<SELECT NAME = "date">

NO_MORE;

$date = date("D, F j, Y",$selected['date']);

//

print "<option SELECETED value=\"";

print $selected['date'];

print "\">$date</option>\n";


//print($selected_date);




list($hour, $minute, $second,$month, $day, $year) =

split(':',date('h:i:s:m:j:Y'));

// print out a week's worth

for ($i=0; $i < 8; ++$i)

{

$timestamp = mktime();

$timestamp = mktime($hour,$minute, $second,$month,$day + $i - 7, $year);

$date = date("D, F j, Y",$timestamp);

//

print "<option value=\"$timestamp\">$date</option>\n";

}

echo <<< END

</SELECT></P>


Enter a Picture Selection

<input type="text" name="picture_number" value =

END;


print ($selected['picture_number']);



print (">");

//echo <<< NEWEND

print ('<input type="hidden" name="stage" value="process">');

print ('<input type="submit" value="Get Gallery">');

print ('</form>');

//NEWEND;

$next_picture = $selected['picture_number'] + 1;

$previous_picture = $selected['picture_number'] - 1;

// the previous button


print ( " <A HREF='index.php?region=");

print ($selected['region']);

print ("&picture_number=");

print ($previous_picture) ;

print("'>");

print ("Previous</A>");


// End of the previous button



// the next button

print ( " <A HREF='index.php?region=");

print ($selected['region']);

print ("&picture_number=");

print ($next_picture) ;

print("'>");

print ("Next </A>");


// end of the next button

add_center_end();

}


From what we see above we can very quickly understand a better structure that will allow for a more useful and reconfigurable interface.


The advantage of just using html that is as close as possible to what the user will see is that no extra form-generation machinery is needed.

And for this reason it is not a bad idea to do things as shown above. However, the php was a little bit quirky. The echo commands didn't always work that well. In the body of a php function it seems more reliable to write out the html with print statements.

But using the echo command gives output whose source is easy to read. Since the html is under the covers, it doesn't really matter if anyone thinks your output is pretty as long as you have a pretty page.

A pretty page! Ha! I so often find that there is some kernel of truth that I am trying to express about some subject. And then I make a statement that is meant to not be an edict but sounds like one when I am done saying it. Thus I must say that the way an html file behaves is very important. I have crafted very simple files that have made my browser grind to a screeching halt. I am guessing that I documented that elsewhere. But it had to do with trying to open up 150 or so 3.2 meg jpegs all at once at a 300x400 size. The browser tried to do it but just couldn't.


I know that the browser will handle showing 150 or so 300x400 images as long as the images are already scaled. And so one might say that a file that would crash the browser so hard is 'ugly' html. And yet the formatting of that file was exact, as I used a simple block copy and paste and then iterative numbering. The lines were all the same except for the numbers of the jpg files. So the formatting was all pretty, but the result was a disaster; we could term it ugly.


And so don't make files that crash the browser. You'll know things aren't working right away when you do this.


The above code is rigid in that it will create three input fields with associated commentary. The POST from the page will be formatted in an exact way determined by the code that is there. And so to change it we have to 'think' in html.

It would be best if we could reduce the creation of these forms to the most basic concepts. I am sure that this problem has been solved many times before. And html solves it pretty well too.


Basically, when the data comes from a GET or a POST request from a browser, or from a wget typed at a shell prompt, the data is formatted in a particular way: name=value pairs delimited with '&' or ';', with the values URL-encoded. On the page side, all kinds of wrapping gets done, and the elements get formatted with styles.
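
As a small illustration, here is the sort of query string a GET request might carry (the field names are simply the ones already used on the gallery page), decoded with PHP's parse_str(), which performs the same decoding that PHP does automatically when it fills $_GET and $_POST:

// A sample query string, as it would appear after index.php?
$query = "region=flowers&picture_number=33&stage=process";

// PHP does this same decoding for us when it builds $_GET and $_POST.
parse_str($query, $pairs);

print_r($pairs);
// Array ( [region] => flowers [picture_number] => 33 [stage] => process )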


I would like to have an input button that will let the user add a comment to the picture.

Then I would have to add a part where the user could read his comment. All of this has been done by other people as well, hasn't it?

And I need a place where the seller adds his price. And maybe he wants some fancy functionality like the ability to set a sale or discounts of n%. All of that has also been done a million times before by other people too.


All of that can be built on the PFI.


Review 1 December, 2004


Design tasks like this take longer than people plan. Usually when I start to look at these things I decide quickly enough that too much work is just too much work. And so I get stymied by the amount of work that is needed. Breaking things down into tasks is a good thing to do. And one must not code continually, but let the problems air out, and make things happen.

In the case of the forms project I think that a complete 'solution' to the problem is just called html itself. There are so many issues about how to spawn a page that doing it efficiently becomes a driving force. But then again just getting things done is also important.



Currently I wonder if the overhead of making complex classes to do all of the things that I need to do is worth it when I can so easily make a file in php that will be a mirror of what I need. In other words the xml needed to define the html that is the form on the page is just about the most efficient way to encode the data that I need. Why? Because it is already almost in the form that the browser needs to see. But the speed of computers is great, so does this matter? Obviously if I am serving up thousands of pages a second it does. I need to worry about this but not right away.

Although the raw php code shown in the pages above is very efficient for php, there is still the need to load it. It seems to me that I need to make something work. This means a lot of time making html control classes. And perhaps these classes will also output a more efficient form of script if needed. I can worry about that later.



Currently I have created a parent class called HTMLControl. All of the controls are children of this. There is one for each of the controls talked about in the specification. The group button control is a special case in that it has multiple instances of it.

The pull down list also needs a special situation to be able to add more items to the list.



I am going to start with the controls that I am using and see what issues come up. I figure that the controls that I am already using are a good place to start. I will work at replacing the ones that I have with ones that are created programmatically.





Hidden Control, a case study for the new idiom



How much simpler could it be to make a hidden control than the following lines from our php script:


print ('<input type="hidden" name="stage" value="process">');


In the case above we are hardcoding all of the html into the php print statement. This is only reconfigurable by editing the php script where the information is provided. There will thus be a hidden variable stage that goes along with a post request. It will have the value of "process".



The string needed to provide this is so simple that it would make sense to have this as a static element that persists across calls to this script. That way it would not need to be created every time. However, that would only be necessary if there were a need for it, that is, if I wanted the data to never change. I can't see a more elegant way to do this.

In order to provide this as part of the output of a class we suddenly have the overhead of having a class. And this might be expensive in terms of memory and processor cycles. But let's not worry about this. Because maybe we will use our class, expensive as it is, to build another software creation that will be efficient. That is, this class might morph into the creator of new software. It may become the tool from which other tools are built. Or maybe it will work just the way that it is created and we will use it as we make it and walk away unless we need more efficiency.

Parsing the hidden variable

The script that generates the html is also the one that handles the post from that very page by the user. And so part of any design is the other side of it, parsing the data into a usable format. Fortunately PHP does this for us. If the data is there it will show up in the script in the variable array $_POST. A simple print_r will output that to the html that is served up by the script. We can use this little feature to take a look at the data that the script will have if it gets it from a POST. Here is output pasted from a page that was created by the php script:


Array ( [region] => flowers [date] => 1101909820 [picture_number] => 33 [stage] => process )

Here are print_r statements for various arrays:

//print_r($selected);

print_r($_POST);

print_r($_GET);

//print_r($_SERVER);


For a non-debug version comment these out.

Notice that there is no data in the $_GET array. Why? Because the page was posted and not gotten. And so we see this jockeying over which type of data needs to be considered. There will be one or the other. If a page is gotten from a GET, i.e., from a link with a query string attached, then the data will be in the $_GET array. If from a post then in the $_POST array. So I created a $selected array into which I copy the data depending on which type of request I get.

The data is a set of name value pairs. Thus there needs to somehow be a list of these. In the case of the scripts that we have the html maintains this data and the php picks it up. The data is hard coded. This produces a decent interface. But it means that adding data to the list isn't that simple. I want to make it simpler.

Suppose we have a simple constructor like this:

function HiddenControl($name,$validity_parameter,$is_valid,

$is_required,$default_value,$input_prompt,$error_message,

$enabled);

We would call it like this:

$hidden_control_1 = new HiddenControl

("hidden_ctrl_1","","false","false","ctrl_1_value",

"","","true");

We can then create this control by calling this constructor. And then when we want to output it to the html we call a show function.

$hidden_control_1->show();

This would obviously have to be wrapped inside of the form.
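
To make the idea concrete, here is a minimal sketch of what such a class might look like. The member names and the unused parameters are my own assumptions; as noted below, the real prototypes will end up different:

class HiddenControl
{
    var $name;
    var $default_value;
    var $enabled;

    // Constructor: only the parameters that this sketch actually uses are kept.
    function HiddenControl($name, $validity_parameter, $is_valid,
        $is_required, $default_value, $input_prompt, $error_message,
        $enabled)
    {
        $this->name = $name;
        $this->default_value = $default_value;
        $this->enabled = $enabled;
    }

    // Write the html for the control. Must be called inside the form.
    function show()
    {
        if ($this->enabled == "true")
        {
            print ('<input type="hidden" name="' . $this->name .
                   '" value="' . $this->default_value . '">');
        }
    }
}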

Note that above I have discussed how to do a simple design. What I come up with, what the prototypes really are, will be different. Because the coding of this is trivial and the real work is in designing the objects and calls and how the data is stored. The data here is the king and is the reason for us wanting to do this, so we can get data from our page and also request data from our users. And so when I am done maybe I should rewrite this document to reflect what I create. But that would be like a cheat and not show the reader the real process of creation. What you read here is a preliminary, written before code is cut. I have divided these up in my head and will now implement the few functions involved. I already stubbed out the classes.

I will need to make the constructor and the show function.

Results

That was surprisingly easy to do, taking less than 15 minutes. I rearranged some of the parameters. I also noticed how the page was cached and didn't update as expected. I have heard that it is often good to also send a random number so that the page request will always be different. I suppose that I should do this as a first hidden item that I send. I will also need to do this with GET url strings as well. These are output as links from the various images or text with hyperlinks.
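
A minimal way to do that, assuming nothing more than PHP's mt_rand(), is a throwaway hidden field; the field name nocache here is only an illustration, and the same number can be appended to the GET links as an extra parameter:

// Add a random value so that every request looks different to any cache.
$cache_buster = mt_rand();
print ('<input type="hidden" name="nocache" value="' . $cache_buster . '">');

// For GET links, append the same value to the query string.
print ("<A HREF='index.php?region=flowers&nocache=" . $cache_buster . "'>Next</A>");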

Next I will implement the pull down list of items.

Pull Down Lists

The pull down lists are not that hard to do either. There is a need to express what the values are to start with. And for this reason the item is slightly more complex. The value that is passed needs to be preserved. I do this currently by having an array, $selected, which associates the control name with its value. Currently this array is global to the script. It might be better to have this encapsulated into a class as well.

The function that makes the list of links for all of the pictures that is in the file link_layouts.php also needs to know all of the parameters so that it can make up the links with a proper GET string. At the place where the information is either received as a POST or as a GET, at the start of the script, the system needs to know what the various values are.
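
Here is a rough sketch of how a pull-down could follow the same idiom, with an add_html_ctrl style call for adding items. The names and parameters are my assumptions and not the final prototypes:

class PulldownControl
{
    var $name;
    var $prompt;
    var $items = array();

    function PulldownControl($name, $prompt)
    {
        $this->name = $name;
        $this->prompt = $prompt;
    }

    // Add one more entry to the list.
    function add_html_ctrl($item)
    {
        $this->items[] = $item;
    }

    // Write out the SELECT, marking the previously selected value.
    function show($selected_value)
    {
        print ('<P><STRONG>' . $this->prompt . '</STRONG>');
        print ('<SELECT NAME="' . $this->name . '">');
        foreach ($this->items as $item)
        {
            $marker = ($item == $selected_value) ? " SELECTED" : "";
            print ('<OPTION' . $marker . '>' . $item);
        }
        print ('</SELECT></P>');
    }
}

// Usage, mirroring the gallery selector shown earlier:
$gallery_list = new PulldownControl("region", "Please Select a Gallery: ");
$gallery_list->add_html_ctrl("minuteman");
$gallery_list->add_html_ctrl("flowers");
$gallery_list->show($selected['region']);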

Handling the POST and the GET

Here is a little piece of code to determine if we get a POST or a GET:

if ($_SERVER['REQUEST_METHOD'] == 'POST')

{

// print ("<p>we have a post</p>");

print_r($_POST);

$selected['picture_number'] = $_POST['picture_number'];

$selected['gallery_style'] = $_POST['gallery_style'];


}

else if($_SERVER['REQUEST_METHOD'] == 'GET')

{

// print ("<p>we have a get</p>");

print_r($_GET);

$selected['region'] = $_GET['region'];

$selected['gallery_style'] = $_GET['gallery_style'];

$selected['picture_number'] = $_GET['picture_number'];

}


This is followed by a piece of code that sets the selected values if they are not valid as shown here:


// if we didn't get certain values, then set them here.

if (is_null($selected['region']))

{

$selected['region'] = "flowers";

}

if (is_null($selected['picture_number']))

{

$selected['picture_number']= "10";

}

if (is_null($selected['gallery_style']))

{

$selected['gallery_style'] = "Windy";

}


Notice that within the body of the if and the else if we set values as determined by what was passed to the server. And so we see that at this point it would help us if we had already built all of the information for the form and had access to it. That way we would just loop through our various elements and set them as needed. That part of the functionality can be handled by yet another array wrapped in yet another class.
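
As a sketch of what that loop might look like, suppose the form information lived in a simple defaults array; the array layout here is an assumption and not the final class design:

// Default value for each form element we expect back.
$form_defaults = array(
    'region'         => 'flowers',
    'picture_number' => '10',
    'gallery_style'  => 'Windy'
);

// Pick the source array once, then loop instead of repeating ourselves.
$source = ($_SERVER['REQUEST_METHOD'] == 'POST') ? $_POST : $_GET;

foreach ($form_defaults as $name => $default)
{
    if (isset($source[$name]) && !is_null($source[$name]))
    {
        $selected[$name] = $source[$name];
    }
    else
    {
        $selected[$name] = $default;
    }
}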

Previous and Next Buttons

These buttons also need to have the gallery_style embedded into them. I will modify them so that they do, as they don't have it now. It is telling to note that my writing about the fact that I am going to modify these buttons probably takes more time than just doing the modifications in the first place. As this is the case I find that I cannot document every little thing that I do; instead you can go to the files themselves and look at what I have done.

I have added these hooks now, and I can already see where I can make things more efficient with a different design. I will let these new designs incubate before I use them so that I can think about this some more. As far as the rest of the forms interface goes I need to keep doing that. I should implement the ones that I am using next and take out all of the old forms.

When you write about what you are doing you are not doing it. Thus sometimes it is better to make the code as opposed to writing or talking about it. However in this case I also want to push myself forward in a way that will give me results. Documentation is a useful thing, but only if you use it later. I have spent a lot of time writing up documents and I also need to spend time later reviewing those documents. But just by writing them I then have an idea of what I want stewing in my mind. And I let it stew and then decide what resources I want to create. And I can't do too much, so I try to do things simple and lazy. As lazy as is allowed to get things done. This usually means that I am working hard, but not too hard. I find that the documenting then gives me a metric to look back on later to see if my ideas have come to fruition. Also it will give a guide for others who want to learn to code too.

A Little bit of Computer History

I have learned over the years that the efficient solution is often a mirror of the one that is syntactically elegant. Problem solving in programming means that one assesses the needs and requirements and makes a solution that will perform for the future. This means that one does not just code but one must also design. And as a designer one perhaps doesn't get things working right away because there are mistakes that can be made. So we sit paralyzed and worried that we will blunder in some huge way and make something that will fall down under its own weight.

But how can we ever really know what the results are unless we cut the code and run it? We have to.

So, the proper way to proceed is to create designs that are simple enough as to not make their construction arduous. The daunting task will rarely be finished. Make designs that are extensible and not brittle.

Solving programming problems means that there must be a problem that is to be solved. Design choices are huge, from the language that you use to implement to your choice of classes or modules to link in. There is the theory of the generic solution. However the solutions that run on people's computers are always specific to those machines. It can not be any other way. A port to a different platform might be as simple as doing a compile, however that isn't always so as the devices or ports of the two machines might not be compatible. So we write our systems with dependencies. We can't do it any other way. We abstract desirable functionality as modules. And then we say that our package depends on these modules.

As an example of what I mean let me discuss the ethernet adaptor on a PC. It was not too many years ago that the TCP/IP interface on the personal computer was an expensive option. Only workstations would have such networking capability. It was non-trivial to have TCP/IP and it was also expensive. Not only that but most people didn't even know what it was. But now can you imagine anyone not having it and that not being an issue? How is it so that we are able to have it in so many flavors on so many machines including handhelds and also wireless connections from cell-phones and cameras. And it all works reasonably well together. Why? Because the concepts have been standardized and everyone uses an interface that is similar. And thus we have modules and packages that we know can work as long as the packages that handle TCP/IP are also available.

And if you pull a 15 year old computer out of a junk yard and it still boots it is non-trivial to make it work with TCP/IP. Thus we are light years away from that.

What other interfaces are as ubiquitous? They all spawn from something that was UNIX like. I suppose that UNIX was the ecumenical operating system. It said: here are some well defined protocols. If you implement all of these in your OS just as we specify, then you too can call your operating system UNIX. The 'U' comes from 'uniplexed', a pun on the earlier Multics system.

So there it is. And if you don't know about UNIX then you don't know about the history of operating systems. All that came before was unified and there became this collective culture of UNIX. And if you went from machine to machine, even if it was from a different vendor, UNIX gave you a common base so that every machine had common functionality. This was the base onto which all other functionality could be built.

Adding Slideshow Behavior

I have seen a site that has a slideshow like effect. This site does a refresh every so many seconds to allow for the stuff to animate. I don't remember where this was but I should find it and see how they did it.

Just from looking at the url and the get data that is provided I intuit that the system will do as follows:

spawn a php script that goes into a loop and keeps asking for the same page.

I am not sure though, so maybe I should look up somewhere how they do this. I examined code that I found on the web and they use javascript to do the slide show. I am sure I could look something like that up or just look at the page that has the show to see what the code is.
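
One very simple way to get the effect without any javascript at all, assuming the page already accepts picture_number in the query string, would be a meta refresh pointing at the next picture. This is only a sketch of the idea, not what that site actually does:

// This tag belongs in the HEAD of the generated page.
// It makes the browser re-request the page every 10 seconds,
// pointing at the next picture in the gallery.
$next_picture = $selected['picture_number'] + 1;

print ('<META HTTP-EQUIV="refresh" CONTENT="10;URL=index.php?region='
       . $selected['region'] . '&picture_number=' . $next_picture . '">');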

Screen Shot 12_01_2004

Here is a screen shot from today:




Better behavior for gallery style and thumbnail style changes

To provide faster response for huge galleries I could somehow have the server hold a script as a module that would already have created a file list. For the display of tiny_pics, which currently has over 15000 images, the lag while the system collects the directory array is significant. What are the ways that I could solve this?

Perhaps I could have it so that the links are already determined and in a form that is already saved. Then the module that loads to show the gallery would also load this data as hard coded structures written in php. My scripts could naturally write this code too. But the page being served up for users wouldn't need to keep generating the same list over and over again.

This redefines what we call a gallery. Right now I just find all files within a directory tree with a graphic extension.

Or, the list doesn't need to be saved as a file (though this is a good idea) and could just live in memory. That is, my scheme would still work even if there were no place to store a file but just scripts.
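
Here is a rough sketch of the file-list caching idea, under the assumption that a gallery is simply a directory of images: one helper scans the directory once and writes a small php file that returns the list as a hard coded array, and the page that serves the gallery just includes that file instead of rescanning. The file names are placeholders:

// Scan the gallery directory once and write the list out as php.
function write_gallery_cache($gallery_dir, $cache_file)
{
    $files = array();
    $dir = opendir($gallery_dir);
    while (($entry = readdir($dir)) !== false)
    {
        // Keep only files with a graphic extension.
        if (preg_match('/\.(jpg|jpeg|png|gif)$/i', $entry))
        {
            $files[] = $entry;
        }
    }
    closedir($dir);

    $code = "<?php\nreturn " . var_export($files, true) . ";\n?>";
    $fp = fopen($cache_file, "w");
    fwrite($fp, $code);
    fclose($fp);
}

// The gallery page then loads the pre-built list instead of rescanning.
$image_list = include("tiny_pics_list.php");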

Theory of thumbnails

The idea of having thumbnails is that display of tiny images is much faster than if the browser or viewer has to scale a huge image to be small. Also thumbnails look good enough to give a general idea about shape and color but are unusable for other purposes. I have documented elsewhere my scripts for creating thumbnails. These scripts produced three different sizes of thumbnails. Thus the way that the php scripts serving the web page must respond depends on the type of thumbnails that a gallery contains.

This means that the design of the viewer must also have a way to provide the file list or the image list for use by the link array of images or buttons. This means that a gallery has a thumbnails policy. It might be that every picture also has associated smaller pictures. It might mean that all pictures should be displayed in the way that they are already. This extra complexity must be added to our model and accounted for.

Commenting on Pictures

Viewers can be allowed to comment on pictures. This may be global for all users for some pictures and also a login thing for other pictures.

I will create a file called

comment_tender.php

which will be the interface for allowing the addition of comments and the maintenance of comments.

Rating Pictures

Viewers can be allowed to rate pictures.

There could also be an advanced rating system that would be like a poll. This would be a fun feature that could be customized. It would be a variable, moderator controlled form.



Putting pictures into categories

Pictures will be placeable into categories.

Creating Categories

Categories can be created by passworded users.

There could be a raw version of things that lets people use pictures and put up a very free form and artsy category.

"We have created this category and we invite you to enter your picture into it."

Cropping Pictures

A reasonable URL POST/GET interface can be created that will allow the cropping of images in galleries. When the user posts to receive such a page, all pages generated by the script will have all html elements in the same place. And so if the connection and the server are fast enough one could thus create a box or a mask on the picture by issuing repetitive clicks on the image. The image would, of course, have to be an Image Control that will allow for this. This seems like a good use for the ImageControl.

Rotating Pictures.

The controls for this should allow the user to specify that a picture needs to be rotated before being displayed. This would be a way for a user to create a thumbnail that already has this rotated capability.


Masking Pictures

Masking will be a necessary part of any advanced web-creation interface.

Draw

The Masking is essentially a drawing capability that creates a mask. So if I have masking I also have drawing.

Layer

Any non-trivial drawing program always provides layering capabilities. We should have this too.

Tiling

Tiling of images can be easily achieved by setting background images properly. It would also be cool to have advanced tiling so that things like the pluperfect square (which is discussed and implemented elsewhere) could be easily created through a simple tiling description. Do we thus introduce yet another type of language?
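
The basic version of this is already available in plain html: a table cell (or the page body) given a background image will tile it automatically. A tiny sketch, using the same sort of attributes as the branded-page code later in this document:

// A table cell whose background image is tiled by the browser.
print ('<Table border="0" width="400">');
print ('<TR><TD height="200" background="/images/paper1.png">');
print ('Content laid over a tiled background.');
print ('</TD></TR></Table>');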

Storing of Web Created Mosaics

The storage of these Web Creations can be accomplished in many different ways. As the user creates the masking and lays out the tiling of various images, the artist/user will have to be able to view this to continue along in the process. The list of elements needs to be available to the user, and in this way the user needs to be able to edit, select, enable, show, hide, scale, crop, and rotate various elements.

How can this data be stored? It needs to be stored in a way that allows the server to efficiently spawn the pages. There are various output categories that need to be available to the user to make the process of creating these pages efficient. But also the user needs to be able to circumvent the clicks if the user so desires.

I suggest that we could support various different mechanisms for storing web-creations. I also suggest that we should also provide ways to translate the storage formats.

It is important that an open-source storage mechanism be used. That way we will not be prey to litigious sharks.

Web Creation Scripting Language

What better web page creation scripting language than html/xml? With this we could very easily create a way to pass valid creations up to the server so that the server can show these to the public at large.

With an open architecture it is not a hard matter to have a different way, as well, for the users to send the server data. And there is no reason that we can't also have different versions of scripting languages as well.

2 December, 2004

I have implemented a lot of the various Form Control types as php classes. Currently I don't have all of the various types, but what I use.

Also there can be different ones that extend the common ones, like one for doing dates like I have on the page already, but I have not implemented this as an HTMLControl in php.

The full range of all html functionality is a good goal. But also there will be forms of output that are very easy to use with terse input. And the various tools will be selectively available to users of the web page.



Here are the classes that I don't have yet:

1. The date selection control like I have on the page already could be instantiated as a class.

2. File Control. I do need the ability to upload files. But I have not looked into this yet. I should study the issue a little more before I proceed.

3. ImageControl. This type of control returns an x and y value based upon where in the picture the image was clicked (see the short example after this list). I can see a lot of use for this. The standard recommends that this type of a control not be used in cases where the effect of a click is very different from region to region on a map. Why? Because of the blind. And so for images that have multiple effects, and that don't just want an x and a y coordinate from a graphic, the standard recommends that a client-side image map should also be considered.

4. Client Side Image map. I had not considered how to use this control. And now I see that I probably need this.

5. Form Spawner: This will be yet another wrapper class that will contain the controls and then write the html for the form. It would have to also be able to know the styles and apply them as needed. I need to learn more about style sheets in order to use the very simple way of attaching styles. Really this means that I should read and see examples.
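
For item 3, the html itself is just an image-typed input; when it is clicked the browser submits the click coordinates along with the rest of the form. A small sketch (the control name pic is only an illustration; PHP turns the dots in the submitted names into underscores, so the values arrive as pic_x and pic_y):

// The control itself, inside the form:
print ('<input type="image" name="pic" src="images/50.png">');

// Reading the click position back on the server side:
if (isset($_POST['pic_x']))
{
    $x = $_POST['pic_x'];
    $y = $_POST['pic_y'];
    print ("<p>You clicked at $x, $y</p>");
}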







In addition some of the simple controls were implemented and not yet tested: Password,

what else? I should have a sample page that has all of the controls on it. In order to see if the data is coming back I have a simple print_r statement for the $_POST and for $_GET arrays that I uncomment. I could also place a button that would just be a test button and run the script in test mode.



I need to further experiment on placement of elements, formatting, style, and also handicap accessibility features. Wherever there can be an ALT there should be one, to help anyone who has interest and has to use a non-graphical browser.



After I have implemented all of the various types of controls I will tweak around with different types of layout classes and see if I can shake out anything reasonable.



It seems to me that the real way that people do web pages is to create little pieces that they insert into other things. I have the little pieces as far as Forms go. I should do some other classes for content.



Here are some other possible class ideas:


class htmlParagraph

class htmlList

class htmlTable
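
As a sketch of the first of these, following the same name/value-pair idiom as the style class described later in this document (the member names are my assumptions):

class htmlParagraph
{
    var $text;
    var $attributes = array();

    function htmlParagraph($text)
    {
        $this->text = $text;
    }

    // Any attribute at all can be attached; html ignores what it
    // does not understand.
    function set_attribute($name, $value)
    {
        $this->attributes[$name] = $value;
    }

    function show()
    {
        $tag = '<P';
        foreach ($this->attributes as $name => $value)
        {
            $tag .= ' ' . $name . '="' . $value . '"';
        }
        print ($tag . '>' . $this->text . '</P>');
    }
}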















The above drawing shows a very simple minded view of the system. This is at best a caricature, but from it you see the very simple process of the user creating and submitting content to post at a site. Naturally this implies the ability to serve the content back to the user.

Saved Content

How is it best to save content? Would it be best to save it in a form as close as possible to how it was submitted? It seems to me that it needs to be somehow saved in a way that makes it easy to serve it back to the submitters and whoever else is interested, or what is the point?



Also the question of editing what already exists is important. Each display scheme for the users needs to have certain data associated with it. Also, what if someone has already created html content, and they want to post it? Perhaps I can just have them upload it and then serve it back with a curl call.



The solution of how to save the data is not simple, yet it is not difficult either. So many people say to use a database, but that should only be for some things. In the meantime I don't have any plans for this. Though I am sure that it is a good thing to do. I suppose I should start to figure out how to save pictures and comments. I think that there are some things that are good for databases, others are not. It would be foolish, for example, to save full-sized images in a database. Also it would seem foolish to save html content like this, too, as this could be just saved as xml in a directory with a reference in a database if needed. But in a very real way there is not a compelling requirement to have a database.

Mapperilli

Mapping is a part of what I want to do with the website. I believe that locational mapping of photographs will be a desired feature. ImageControls will be a very real part of this. I want to have a mapper that will let the user put items into a database that will be the locations on the map. The database type might just be of a proprietary nature, but something like postgreSQL or mySQL would also be possible.



I want to set up a spider to get maps off of a public site. These maps will be in a format that I will not put on the web. I will transform them to png's and then make a mosaic of them for my website.

I am up for taking on this ambitious project. The following steps are part of it:



1. Go to the website and get copies of the GET commands from the URL's of some of the more interesting servers (free and public)

2. Decode these GET commands so that I can automate the creation of the request much like I did with the deed spider project (which I was paid to do).

3. Set up Bluesky to run the spider. I will get maps of the whole country.

Should I mix up the algorithm so that the requests are staggered and also pseudo-random?

If I make a whole long list of all of the different possible rectangular regions, I can figure out all of the url's in the first place. I could then just have the list and do the GET's for all of the various maps, do a random mixing of them, and then get them all one by one. One by one, and the delay between each will be random.

But won't the server be able to see that I am getting a lot of maps? I suppose it might, however if I do it slow enough, then things will be OK.
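
A rough sketch of that fetch loop, assuming the full list of bounding boxes has already been computed. The bbox values and the output file names here are placeholders, the request url follows the terraservice pattern shown below, and the delays are deliberately long and random:

// Build the list of request urls from precomputed bounding boxes.
$bboxes = array(
    "-72.80386,42.60134,-72.69284,42.66952",
    "-72.69284,42.60134,-72.58182,42.66952"
    // ... the rest of the grid would follow ...
);

$urls = array();
foreach ($bboxes as $bbox)
{
    $urls[] = "http://terraservice.net/ogcmap.ashx?&request=getMap"
            . "&version=1.1.1&bbox=" . urlencode($bbox)
            . "&srs=EPSG%3A4326&layers=DRG&format=png"
            . "&height=597&width=972&styles=";
}

// Mix the order so the requests are not one neat sweep across the grid.
shuffle($urls);

foreach ($urls as $i => $url)
{
    // Requires allow_url_fopen to be enabled; curl would work as well.
    $png = file_get_contents($url);
    $fp = fopen("map_tile_" . $i . ".png", "w");
    fwrite($fp, $png);
    fclose($fp);

    // Wait a random, polite interval before the next request.
    sleep(rand(30, 300));
}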


http://ims.cr.usgs.gov/servlet/com.esri.wms.Esrimap/USGS_EDC_Elev_NED?&request=getMap&version=1.1.1&bbox=-73.73867%2C42.02719%2C-71.75807%2C43.24367&srs=EPSG%3A4326&layers=US_NED_Shaded_Relief&format=png&height=597&width=972&bgcolor=0xFFFFFF&transparent=true&exception=application%2Fvnd.ogc.se_xml&styles=


http://terraservice.net/ogcmap.ashx?&request=getMap&version=1.1.1&bbox=-73.73866%2C42.02719%2C-71.75806%2C43.24367&srs=EPSG%3A4326&layers=DRG&format=png&height=597&width=972&bgcolor=0xFFFFFF&transparent=true&exception=application%2Fvnd.ogc.se_xml&styles=


http://terraservice.net/ogcmap.ashx?&request=getMap&version=1.1.1&bbox=-73.73866%2C42.02719%2C-71.75806%2C43.24367&srs=EPSG%3A4326&layers=DRG&format=png&height=597&width=972&bgcolor=0xFFFFFF&transparent=true&exception=application%2Fvnd.ogc.se_xml&styles=


http://ims.cr.usgs.gov/servlet/com.esri.wms.Esrimap/USGS_EDC_Elev_NED?&request=getMap&version=1.1.1&bbox=-72.80386%2C42.60134%2C-72.69284%2C42.66952&srs=EPSG%3A4326&layers=US_NED_Shaded_Relief&format=png&height=597&width=972&bgcolor=0xFFFFFF&transparent=true&exception=application%2Fvnd.ogc.se_xml&styles=


http://terraservice.net/ogcmap.ashx?&request=getMap&version=1.1.1&bbox=-72.80386%2C42.60134%2C-72.69284%2C42.66952&srs=EPSG%3A4326&layers=DRG&format=png&height=597&width=972&bgcolor=0xFFFFFF&transparent=true&exception=application%2Fvnd.ogc.se_xml&styles=


This gets me a pretty good map:

http://terraservice.net/ogcmap.ashx?&request=getMap&version=1.1.1&bbox=-72.80386%2C42.60134%2C-72.69284%2C42.66952&srs=EPSG%3A4326&layers=DRG&format=png&height=1194&width=1944&bgcolor=0xFFFFFF&transparent=true&exception=application%2Fvnd.ogc.se_xml&styles=



I doubled the height and width from the one above.


Using a spreadsheet I got a pretty good map that is a quarter of the size of the one above, which has double the pixels of the one above that. I suppose I can try to get even better resolution.

-72.74835%2C42.63543

http://terraservice.net/ogcmap.ashx?&request=getMap&version=1.1.1&bbox=-72.80386%2C42.60134%2C-72.74835%2C42.63543&srs=EPSG%3A4326&layers=DRG&format=png&height=1194&width=1944&bgcolor=0xFFFFFF&transparent=true&exception=application%2Fvnd.ogc.se_xml&styles=

This one comes back with an error about the resolution per pixel.



-72.776105%2C42.618385


http://terraservice.net/ogcmap.ashx?&request=getMap&version=1.1.1&bbox=-72.80386%2C42.60134%2C-72.776105%2C42.618385&srs=EPSG%3A4326&layers=DRG&format=png&height=1194&width=1944&bgcolor=0xFFFFFF&transparent=true&exception=application%2Fvnd.ogc.se_xml&styles=


Here is the error:

Service Exception Report

Found 2 errors


DRG Target Resolution '1' cannot be less than 2 meters per pixel
Did not find a layer to process

December 6, 2004

Implementing Tables

We have created examples of tables which I wrote as raw html. These started from examples that I found in the W3C specification that I modified and crafted into other things. Also I have the pluperfect squares example in two different forms. I have documented this elsewhere.



With the model that I have created for the forms project, I am now doing similar things with html tables. And I have created this even as I am working on the forms, since the designing and documenting might be inspired work at one moment while at another time I might decide it is better to implement. Anyone reading through these verbose discussions on the creation of these classes knows that I might spend some time talking about things that are better just implemented. Or I might describe some alternative approach that I have not implemented. It seems to me now that this is a good thing to do for myself to keep a handle on my ideas. However, there is a level of indulgence in writing too much on mundane design decisions.

I believe that there is a more logical class design than what I have implemented so far. I could have an htmlEntity class that would be the parent of all of the classes that I have implemented. This would be a super container for pages.

I do not currently have this design. Implementing it may make sense. If so then it will be a flurry of activity that will have terse documentation other than the code that I produce. After all it is important to me to start actually hosting pages and too much documentation is just a waste. Just enough and no more. Oh no, I am repeating myself.


Here is the pluperfect square:






Adding Cells

I am setting up the table so that each cell can contain an entity. Next we must consider that the entities need to be laid out, and that layout is a complex process that involves making sure that rows and columns are sized correctly. It was useful to create all of the sample tables so that I could see how these things actually looked. And I was able to see a lot of different things that I could do. And so there needs to be a lot of flexibility.

However, there should also be the option for canned tables which will be children of the main type of table. Or the table can get a policy that it uses when it displays that allows the setting of various parameters, or even allows for a more logical layout.



The variable size of content drives the way that the browser will render the content. And so there are tricks that we can use with html that give us some control over how things will look. Also we can set up tables that we then insert into other tables. Also we need to allow for building forms into tables as well.

All of these issues and many others make designing of these classes interesting. I am making tradeoffs to get to the happy place of having an up and functioning website.

Images

The following is a snippet from the pluperfect square:


<IMG class="runtest3" src="images/50.png" height = 8 width = 400>


That same image is used 50 times, and all together that makes up the 50x50 block. And so we see that it isn't that the cell wants to contain an image. Instead it wants to contain a reference to an image. That way we only have one for as many uses as it is needed.



The copy by value or the copy by reference is an important thing to get right. If I am creating 50 of these image objects that would be a little bit wasteful. Instead I should make one of these and then create 50 references to it. It is probably best to handle all of the various contained entities like this unless the contained entity is going to be changed by the container.

Efficiency Discussion


PHP is a scripting language and if things get too unwieldy and large then I am sure that performance must suffer. But only if there is really a problem will I need to optimize the performance.

In the case of wrangling over architecture for throwaway scripts it seems unimportant to bother to worry about what the actual image will do in memory. And for one-time runs that just output pretty html, which is thus static and forever saved and viewed in that configuration, the efficiency of the code is secondary to the form of the output that the code produces.

In the real world of serving up thousands of pages a second for thousands of users, the efficiency of the script is important.

There are obviously some things that can be greatly improved through a study of efficiency issues. However, resources are cheap and processor power is inexpensive. So I am not going to worry too much.

The price of copy by reference is so cheap and the cost of implementing it so low that this is a no-brainer type of enhancement. But the importance of this is low.

As a software engineer you should always think about efficiency issues. And you should always have a way of knowing what matters and what doesn't matter. For things that run once and never run again, don't worry about efficiency. Startup routines, for example, don't have to be overwhelmingly efficient. You can hold off doing important things until later when they become important. A detailed study of an important process would have to include some kind of Gantt chart that would have all of the critical paths and when they are started.

Just as in scheduling people to do tasks walking around in the air, starting a computer process has to be planned through and the various things or tasks that are to be accomplished must happen in a logical way.

So we have, for example, the sendmail utility waiting for the network to start up. And we have to be at run-level 3 (on a redhat box) before we can proceed to run-level 5 and do a startx.

Great, all of this is great and logical. It has all been thought through before.

How do you know when you have problems with efficiency? You will know because it will become a problem. If you tire of waiting for some slow thing to happen then you will think of a way to make it faster. What are some simple things that you can do?

Profiling is a common thing to do. Another is to see when memory gets declared and how it persists. If there is a lot of copying of data between functions, then there may very well be wasted cycles copying values that aren't going to be changed.

On the other side, it is just as simple in C to pass an integer on the stack as to pass a pointer to it, because they are the same size.

What rules can you follow? If I was asked to do this for someone I would first investigate the size of all of their structure definitions and whether they were too large.

I would see how they pass their data around. I would investigate how they define their structures. In PHP, which is a garbage collected language, these issues are not as critical. However, if you are running scripts live on the internet they should be efficient if you expect a lot of traffic.

I will continue this discussion another time.

A Style Class: Obvious Things Become Clear

I have implemented a style class called HTMLStyles. I also have a container for these. This class is very simple minded, but at the same time highly advanced. I just totally give in to PHP ease and idiom and create an interface that is terse and simple. The styles are name/attribute pairs stored in an array.
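
A minimal sketch of the idea follows; the member and method names here are my assumptions, not necessarily what the real class uses. The styles live in an associative array and the show function writes them out as a style attribute:

class HTMLStyles
{
    var $styles = array();

    // Add or overwrite one name/value pair.
    function set_style($name, $value)
    {
        $this->styles[$name] = $value;
    }

    // Write the pairs out in the form style="name: value; ...".
    function show()
    {
        $pieces = array();
        foreach ($this->styles as $name => $value)
        {
            $pieces[] = $name . ": " . $value;
        }
        print (' style="' . implode("; ", $pieces) . '"');
    }
}

// Usage:
$styles = new HTMLStyles();
$styles->set_style("background-color", "#a6CaF0");
$styles->set_style("text-align", "center");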

This obvious design can be extended to the other HTML controls. I can have every one of them have name/attribute pairs and so if they need attributes then these will be in there in a very simple way.

This frees me from having to design for every possibility that HTML allows. This would be a very brittle interface for amateurs. I would not recommend that this idiom be used except by someone who understands that HTML is very forgiving, and so the interface has just about no error checking at all. Why would it need it? If it doesn't work nothing bad happens except the page is not displayed as wanted.

And so as I indicated earlier there are times when obvious things don't come forward right away. I will see how this behaves and roll it into the next revision of things. I will probably just use this idiom, and it is so obvious that it should have been clear to me in the first place.

Also the saving of data into persistent units is probably best done with XML like output. That kind of simple formatting gives a lot of leverage.

There are very good reasons that content is being wrapped up with tags. The whole idea of name value pair is very basic. Using this idiom probably means that I will create an attribute class that will be just like the one that I made for the styles. And there will be different types of show functions for this, for the style list output and also for the attribute list output.

How I will structure this I am not sure. But this will take care of any repetitive code that might crop up.

It will also mean that the names of attributes must be correct. I could create a verifier class that would attach to an object. The verifier could be part of the HTML class and would have access to a list of valid attribute names and also verifiers for these.

HTML is so friendly in that anything that it doesn't understand it ignores.


Name Value Pairs



The idea of a name-attribute pair goes way back to the beginning of programming, when a location in an assembly language program was given a name. That way one could write other statements that allowed an easy way to go back to a specific place in the code. From that we had a lot of good things happen. By the time X Windows came along the idea had progressed to allow for properties. C programming allows this kind of thing by letting the programmer define a structure definition. And then the structure would be populated with values. PHP also allows this.


PHP is so loosely typed that it allows for name value pairs in a very easy idiom. However, this ease is also a peril. A typo or other mistake will result in the code not working as expected.


I have implemented the name-attribute pair idiom with a family of classes that allow for creation of webpages that use the name-value concept very well.

As the idiom evolves I will consider that there are certain attributes for certain types of elements of HTML that are important enough to be an integral part of the tool. For example the background tile image of a table cell, or the colspan or rowspan attributes of a complex table. And for that reason perhaps it is useful to have these as stock attributes.

Or at least we could provide a way to verify that the elements are part of the standard, so if there was a typo then the programmer would know right away. Also we could filter out stuff that isn't needed.

There are a lot of different ways to do the same thing.

The idiom of having function calls that make the page seems like a good one at first until one looks at how the page is spawned and what happens in memory. In C it is important to get these things correct to have efficient code. I have seen a lot of designs that looked really easy to understand when I was viewing them as code, but the system was hobbled by a lot of copying of data around in memory either on the stack or into arrays of elements that are then used by the code.

In the case of an idiom that has set functions, and where the data is the same every time, it is an easy thing in C to create an array statically that is compiled into the code and is part of the program. This is a very common idiom in C programming to make sure that arrays have valid initial value definitions. In the case of a strict idiom that is trying to not have to deal with stupid programming problems like memory leaks or allocation of too much memory, there will ideally be three of these structures: one that is statically defined with values already in it, one that is loaded with data that the user has saved from the last time the system ran, and one that is used as a buffer area into which one writes changes while editing and then saves to the second array. With PHP scripts all of this is over-design. The other design is one that must be reliable and easy to change and understand by the programmer. PHP scripts do not need such elaborate protective memory policies.



However, as I have stated many times, efficiency is always important when things will be run a lot. Also the predefinition of structure is valuable in that it gives a common place for the data that will be used to be so that there is an easy way to review and modify this data. When initializations are flung throughout the code, then modification means that one has to go through all the code looking for the various places of initialization.



Consider the following snippet of code:



$test_radio_ctrls = new RadioControl("test_radio_ctrl","","","","","Test Radio Control","","");

$test_radio_ctrls->add_html_ctrl("dude");

$test_radio_ctrls->add_html_ctrl("dame");


$test_radio_ctrls2 = new RadioControl("age_selector","","","","","Test Radio Control","","");

$test_radio_ctrls2->add_html_ctrl("child");

$test_radio_ctrls2->add_html_ctrl("adult");


$test_checkboxes = new CheckboxControl("known_OS","","","","","Operating Systems I know about","","");

$test_checkboxes->add_html_ctrl("MS-DOS 1.01-beta");

$test_checkboxes->add_html_ctrl("Red Hat Linux 5.4 kernel version -12.345.67-89_10");

$test_checkboxes->add_html_ctrl("MF-DOG");

$test_checkboxes->add_html_ctrl("OSE");

$test_checkboxes->add_html_ctrl("pSOS");

$test_checkboxes->add_html_ctrl("vxWorks");

$test_checkboxes->add_html_ctrl("CORE-2.3");

$test_checkboxes->add_html_ctrl("krUMM-EE beta-12.232");


The above code is taken from an ongoing project. While the idiom seems to be solid enough, and this kind of idiom was promoted a lot in the industry with set and get functionality and object oriented programming, the idiom is prey to the problem of definitions being hard to manage.

And so, though this idiom is OK and works OK, there is another one that will also work.

That would be the one where there are structures defined statically and they load when the program loads. There is thus no need for set functions to load these things, but only to modify them after they are loaded. One will still need to store the strings and values somewhere, but one doesn't need to have them locked up into these set functions as hard coded strings and values.
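
A sketch of what that alternative might look like in PHP, with the control definitions gathered into one statically defined array that is walked once when the script loads. The array layout is my assumption; the constructors it feeds are the ones from the snippet above:

// All control definitions live in one place, easy to review and modify.
$control_definitions = array(
    array('type' => 'radio',    'name' => 'test_radio_ctrl',
          'prompt' => 'Test Radio Control',
          'items' => array('dude', 'dame')),
    array('type' => 'radio',    'name' => 'age_selector',
          'prompt' => 'Test Radio Control',
          'items' => array('child', 'adult')),
    array('type' => 'checkbox', 'name' => 'known_OS',
          'prompt' => 'Operating Systems I know about',
          'items' => array('MS-DOS 1.01-beta', 'pSOS', 'vxWorks'))
);

// Walk the definitions once at load time and build the controls.
$controls = array();
foreach ($control_definitions as $def)
{
    if ($def['type'] == 'radio')
    {
        $ctrl = new RadioControl($def['name'],"","","","",$def['prompt'],"","");
    }
    else
    {
        $ctrl = new CheckboxControl($def['name'],"","","","",$def['prompt'],"","");
    }
    foreach ($def['items'] as $item)
    {
        $ctrl->add_html_ctrl($item);
    }
    $controls[$def['name']] = $ctrl;
}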



Content

The important part of the pages is the content. There is a level of content being able to be constructed from the various objects that I have created to represent the various elements of structured HTML documents. It is obviously also desirable to save these structured pieces of content. But even in the most basic of widget libraries there are always stock objects that are useful just as they are, no layout needed, the layout is already there. And later, if the programmer wants to create something more advanced, then the user can do this. I have programmed with these kinds of libraries a lot. To do this kind of thing one needs to be a programmer.

What are some of the stock pieces of content that could be provided right away at the time that the site goes live? I believe that there are a lot of different things that we can provide. Here are some:



Content Types

Article: The content is an article.

Definition: The content defines a term or a word.

Map: The content is a map.

Form: A form for entering data.

Title: The title of a web page.

Image Selection/Display

Navigator

Privacy Policy

Logos

Reference: A Reference provides information on the source of content that is from books, newspapers, magazines, radio or other webpages. A Reference is designed to give credit where credit is due.

Table: A table has the capacity to lay things out for the user in a logical way. There are many things that tables can be. Tables can be placed inside of tables. This is why they are such an important part of HTML and most word processors.





For all types of stock content there needs to be a way to create, edit, and store the content. This should be in the form of pages, scripts, client side applications, or simply uploading a valid php or html file.

There is no getting away from the need to create complex scripts and to allow website designers to include 'script like' behavior. It is, of course, a necessary component that the scripts behave in a useful way. I suppose if people are paying to host a page and they provide nonsensical scripts, then their website simply won't work. So why stop them from basically misusing the product?

But if it is hard to create the pages and have them work, then it would discourage site holders and perhaps compel them to stop paying for it.

For some of the types of content there can be the ability to edit, add and save content of that type.



Tables and Content

Tables in HTML are the most important type of content for layout of a webpage. If one has a trivial page then tables need not be used. However, it seems that pages are always built from pieces and the pieces need to be laid out in a specific way. One can have content that is just paragraphs with their associated titles and levels of indentation. Also there is the ability to align and change fonts. All types of content that do not include tables can be saved in a very easy way. And then with a CURL statement calling localhost, the content can be inserted anywhere into a page. With tables this content can be laid out in a very precise way.

The following is code that will output a branded page. Most likely I have discussed this elsewhere. Here is the code:



function output_branded_page($image_url ="webresBillP.html",$page_description="Here is the page you requested:")

{

add_html_header("a6CaF0");

add_bg_color("a6CaF0","/images/paper1.png");

print( "<Table border=\"0\" align=\"center\" cellspacing = \"0\" background = \"/images/paper9.png\">");

print("<Caption>".$page_description."</Caption>");

print("<TR border = \"0\">");

print("<TD width = \"5%\" align = \"left\" background = \"/images/paper9.png\">");

print("</TD>");

print("<TD width = \"\" background = \"/images/paper9.png\">");

$c = curl_init("http://localhost/".$image_url);

curl_setopt($c, CURLOPT_RETURNTRANSFER, true); // have curl_exec return the page instead of printing it immediately

$subsequent_page = curl_exec($c);

curl_close($c);

print ($subsequent_page);

print("</TD>");

print("<TD width = \"5%\" align = \"right\" background = \"/images/paper9.png\">");

print("</TD ></TR></Table>");

add_contact_url("contact2.php","APC","Contact Information for Amillia Publishing:");

add_copyright();

} // end of output_branded_page



If you copy this code it will not work for you as there are some calls to functions that I am not including here. However, as you can see from the code, this is a very straight forward use of putting a piece of content into a table. The table has precise alignment. The browser will resize the content when the browser is resized. And so the page should behave well no matter what the screen resolution. Here is a screen shot:






And here it is resized:




Not bad. In addition, the content was created using OpenOffice. It is my resume. I provide the file, open it with a cURL call, and output it through the php script. It works fine. So basically any content at all can be placed into this.



And so we see that we are able to do these simple things, and we are able to insert content. We therefore need an interface that will allow us to do this simply, and it should be one that allows for the insertion of content within tables.

The basic part of a table is a cell. And each cell can itself contain a further table! That makes the layout of content a little bit easier.

And so we need to be able to have various cells that are for content. Other cells are for making sure that layout is correct. For example, it is straightforward to size a table by having cells at the edges that are of a set size.

The browser is very much in control of how things are laid out. If we set up our tables in very specific ways we can have a lot of control in how the browser will layout the data.

It seems that we can create table templates that we can then have as stock elements on our pages. For example there are some types of layout that are very common, like the days of the week laid out in a row, or the days in a month laid out like a calendar.

In Linux we can type the command cal and we get a very pretty layout of a calendar, as follows:

[bperil@bluesky log]$ cal
   December 2004
Su Mo Tu We Th Fr Sa
          1  2  3  4
 5  6  7  8  9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30 31

[bperil@bluesky log]$ cal 2004
                               2004

      January               February                 March
Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa
             1  2  3    1  2  3  4  5  6  7       1  2  3  4  5  6
 4  5  6  7  8  9 10    8  9 10 11 12 13 14    7  8  9 10 11 12 13
11 12 13 14 15 16 17   15 16 17 18 19 20 21   14 15 16 17 18 19 20
18 19 20 21 22 23 24   22 23 24 25 26 27 28   21 22 23 24 25 26 27
25 26 27 28 29 30 31   29                     28 29 30 31

       April                  May                   June
Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa
             1  2  3                      1          1  2  3  4  5
 4  5  6  7  8  9 10    2  3  4  5  6  7  8    6  7  8  9 10 11 12
11 12 13 14 15 16 17    9 10 11 12 13 14 15   13 14 15 16 17 18 19
18 19 20 21 22 23 24   16 17 18 19 20 21 22   20 21 22 23 24 25 26
25 26 27 28 29 30      23 24 25 26 27 28 29   27 28 29 30
                       30 31

        July                 August               September
Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa
             1  2  3    1  2  3  4  5  6  7             1  2  3  4
 4  5  6  7  8  9 10    8  9 10 11 12 13 14    5  6  7  8  9 10 11
11 12 13 14 15 16 17   15 16 17 18 19 20 21   12 13 14 15 16 17 18
18 19 20 21 22 23 24   22 23 24 25 26 27 28   19 20 21 22 23 24 25
25 26 27 28 29 30 31   29 30 31               26 27 28 29 30

      October               November              December
Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa   Su Mo Tu We Th Fr Sa
                1  2       1  2  3  4  5  6             1  2  3  4
 3  4  5  6  7  8  9    7  8  9 10 11 12 13    5  6  7  8  9 10 11
10 11 12 13 14 15 16   14 15 16 17 18 19 20   12 13 14 15 16 17 18
17 18 19 20 21 22 23   21 22 23 24 25 26 27   19 20 21 22 23 24 25
24 25 26 27 28 29 30   28 29 30               26 27 28 29 30 31
31


Using the cal program's output we can craft a table based HTML calendar.
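As a sketch of the idea (this is not yet part of the toolkit; the function name output_month_table is just a placeholder), here is a small php function that writes a month out as an HTML table, laid out the same way cal lays it out:

<?php
// Sketch: write one month as an HTML table, laid out like the cal program.
// $month is 1-12, $year is a four digit year. Illustration only.
function output_month_table($month, $year)
{
    $first = mktime(0, 0, 0, $month, 1, $year);
    $days_in_month = date("t", $first);   // number of days in the month
    $start_col = date("w", $first);       // 0 = Sunday ... 6 = Saturday

    print("<Table border=\"1\" cellspacing=\"0\">\n");
    print("<Caption>" . date("F Y", $first) . "</Caption>\n");
    print("<TR><TD>Su</TD><TD>Mo</TD><TD>Tu</TD><TD>We</TD><TD>Th</TD><TD>Fr</TD><TD>Sa</TD></TR>\n");

    $day = 1;
    while ($day <= $days_in_month) {
        print("<TR>");
        for ($col = 0; $col < 7; $col++) {
            if (($day == 1 && $col < $start_col) || $day > $days_in_month) {
                print("<TD></TD>");          // empty cell before the 1st or after the last day
            } else {
                print("<TD>" . $day . "</TD>");
                $day++;
            }
        }
        print("</TR>\n");
    }
    print("</Table>\n");
}

output_month_table(12, 2004);
?>

Calling output_month_table(12, 2004) writes out December 2004 in the same shape as the cal output above.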


Another type of table that we could craft would be based upon the data in a spreadsheet. In a spreadsheet, cells are named with a row and a column. The data will be line by line, and the first element on a line is the cell identification. For example it would be like this:


A1 "data in cell A1"

A4 "data in cell A4"


We could also examine how Open Office lays out data. The files for Open Office are zipped up when they are saved. In order to see what the data looks like in its raw format we must unzip the file. Then we can view it as text.

Here is an ls of the resulting files:

-rw-rw-r-- 1 bperil 5327 Dec 23 23:39 sample spreadsheet.sxc

-rw-rw-r-- 1 bperil 5327 Dec 23 23:42 samp.zip

drwxrwxr-x 2 bperil 4096 Dec 23 23:42 META-INF

-rw-rw-r-- 1 bperil 5680 Dec 24 2004 styles.xml

-rw-rw-r-- 1 bperil 7685 Dec 24 2004 settings.xml

-rw-rw-r-- 1 bperil 28 Dec 24 2004 mimetype

-rw-rw-r-- 1 bperil 947 Dec 24 2004 meta.xml

-rw-rw-r-- 1 bperil 2803 Dec 24 2004 content.xml

Unzipping provided a handful of files plus a META-INF directory, and one of the files was a content.xml file. Nearly all of the files were xml.

I suppose that we could use the same kinds of formats for what we want to do. I guess we could use a DOM parser, or some such XML tool, to read them.
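As a minimal sketch of that, assuming the DOM extension of PHP 5 is available (on PHP 4 the older domxml functions would be the rough equivalent), here is a walk of content.xml that just prints the element names:

<?php
// Sketch: walk the unzipped content.xml with the DOM extension (PHP 5 style)
// and print every element name, indented by depth. Element names such as
// table:table-cell come from the Open Office file itself; this only shows the walk.
$doc = new DOMDocument();
$doc->load("content.xml");

function walk($node, $depth = 0)
{
    if ($node->nodeType == XML_ELEMENT_NODE) {
        print(str_repeat("  ", $depth) . $node->nodeName . "\n");
    }
    foreach ($node->childNodes as $child) {
        walk($child, $depth + 1);
    }
}

walk($doc->documentElement);
?>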





Process of System Design in the age of Software Packages


Obviously the commoditization of software modules does not translate into the commoditization of money-making software configurations. If this were so the Internet would have no growth. But we see that the plethora of useful applications (web-sites) increases. All of this web site development is presumably funded by some scheme that we perhaps can not fathom. Here are some of the parties that pay for websites:

  1. Government Budgets

  2. Schools/Educational/organizational

  3. Individuals for their own use

  4. Businesses


Naturally what I present is a terse list. Obviously there are many uses and we haven't thought them all up yet. This is a growing field.

Do not be confused by the Internet stock bubble, which was a product of ostensibly corrupt accounting and finance industry insiders. The Internet is a valid, solid backbone for secure cross-system communications and control. The Client/Server model, and specifically the TCP/IP stack based architecture, maintains open and modular specifications that lend themselves well to maintainable and scalable architectures with interoperability.

It would be unfortunate if there were only proprietary schemes for interprocess communications. The explosion of uses becomes mostly network design, and the idea of an application is removed from the fuzzy heads of programmers and emanates from the heads of users. As programmers are the ultimate users, naturally programmers drove much of tool design. But now we see that it is the users who are really in the driver's seat of this race car. And why? Because only if the users are there can a website be profitable. If there are no users then there will be no website.

Sure, we can say that this is the tree-falling-in-the-forest argument. I am not stating an absolute truth that will be held up in courts of law. I am talking about the practical truth that business models will use to say that this or that type of tool or methodology will or will not be available at some future time. If no one uses something then it is eventually deprecated and thus abandoned.

The original schemers in the marketing departments of the technology giants of the 1980's had a lot of things going their way. They were free to set up all kinds of nefarious business arrangements because no one read the fine print, nor did anyone understand what that print meant in the first place. There was a sudden and sweeping legal maneuver which may, in the long term, be seen as invalid. Instead of software being owned, it was leased. And the developers could say when there was or was not an upgrade to said software.



At this time this arrangement is counterintuitive. Anyone who needs to build a system that must be on line in ten years (which isn't really that long a time) needs to know that they have a migration path. And they don't want this to be a path over a cliff.

We are not cattle being run in a Pleistocene era game hunt through fences and to the end of a mesa. We now get where we need to be and get where we want to be in ten years. This happy place of future technology does not have a ruling class. It is an egalitarian fairy-land of continuously over extending paradigms of flooding imagination of equality and possibility.

But evolution is a kind of system in the computer age. Things that don't work are left by the side of the road. Software is never finished; it is abandoned. There is so much of it lying around in the dead devices that we throw away, never to be viewed or changed or updated, but just discarded.

But real systems need to be available in ten, fifteen, fifty years. We need to know that if we spend the time to measure and record anything, we can know what these measurements are years and years later.

Naturally anyone who thinks about this long enough starts to see that this is a fantasy. Real societies don't preserve all things, but only those that have cultural or scientific value. All of the rest is discarded, or ignored. There is no persistence of all things for all people in this world. The bad ideas are only preserved, usually, as a way to say 'here is what we could have had.' Or maybe as a way to say 'See how different it might have been'.

And also we don't always know which ideas are the better or the worse. For example, I did a writeup on how to transfer the pictures from my camera to a computer hard drive, and then to a CD that I can burn a bunch of, put on a shelf, or even sell if I so please.

But maybe the better idea is that there would be a camera that would do all of this for me. And so someone who buys such a camera would never even know how the stuff is transferred. They would never model the system as a USB device from which files are transferred. These folks might not even have the concept of a file system at all. They would be free from the idea of permissions and file read/write bits.

The table can be set up so that it has all of the cells shown when the show function is called. Or the first cell and the last cell can be set so that only part of the table is shown. The default will be that the whole table is shown.


(diagram: the first cell and last cell markers bounding the visible part of the table)


The row heights and the column widths can be set by the user. The way that the HTML is rendered can be set, and different browsers will render in different ways. In the case of being able to set pixel widths, the first row can be used to set the widths of the columns and the first column can be used to set the heights of the rows. As this is the case, there will not actually be any data in the first row or in the first column. The html that the table class writes will have a first row that is one pixel high and a first column that is one pixel wide. These will be used to size the table.
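To make the sizing idea concrete, here is roughly the sort of HTML the table class would write for a small table, sketched here as plain print statements (the pixel numbers are made-up examples, not real defaults):

<?php
// Sketch of the kind of HTML the table class would emit: a one pixel high
// sizing row and a one pixel wide sizing column, followed by the data cells.
// Column widths and row heights here are example numbers only.
print("<Table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n");
print("<TR height=\"1\"><TD width=\"1\"></TD><TD width=\"200\"></TD><TD width=\"100\"></TD></TR>\n");
print("<TR height=\"120\"><TD width=\"1\"></TD><TD>first data cell</TD><TD>second data cell</TD></TR>\n");
print("<TR height=\"40\"><TD width=\"1\"></TD><TD>third data cell</TD><TD>fourth data cell</TD></TR>\n");
print("</Table>\n");
?>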



Link array.


I have worked on the table and now it is doing reasonable things. Now I want to make an array of links. These links will be set into a table.
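A rough sketch of the idea (the function name and the example URLs are placeholders; the real version would go through the table class rather than printing directly):

<?php
// Sketch: take an array of links and drop them into a one column table.
function output_link_table($links, $caption = "Links")
{
    print("<Table border=\"0\" align=\"center\" cellspacing=\"0\">\n");
    print("<Caption>" . $caption . "</Caption>\n");
    foreach ($links as $url => $label) {
        print("<TR><TD><a href=\"" . $url . "\">" . $label . "</a></TD></TR>\n");
    }
    print("</Table>\n");
}

// Example URLs are placeholders.
$links = array(
    "/gallery.php"  => "Photo Galleries",
    "/contact2.php" => "Contact Information",
);
output_link_table($links, "Site Links");
?>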



WebArticle


Now that I have a reasonably full-featured set of tools for designing web pages, I need to demonstrate their use to craft useful web creations. I have already created a poem class. The next should be a full-fledged article class.

The article will have a style collection, in that various styles will be necessary. The default style collection should be useful by itself, and it ought also to be brandable with site-specific styles so that each website can have an individual look.

We need a default list of styles, not just for an article but also for a site in general. A site could have hundreds of articles. It might also be useful, as the styles at a site change, to retain the earlier styles for the article in question. The elements of an article are listed below, and a skeleton of such a class is sketched after the list.


Article title

Image List

topic list

links list

A web article can very obviously have a list of links.

footnotes list

Footnotes and references are an important part of scholarly publication. Their inclusion allows the readers to check the facts themselves. They also show that the author has done his work.

Copyright Notice

Authors and Contributors

Authors

Contributors

Date of Article

The article was first published or posted on this day.

Affiliation

An author is often affiliated with a news service.

Text

tables

graphs and charts

Element Ordering
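Here is the skeleton of such a WebArticle class, holding the elements listed above. The property and method names are my own placeholders and not a final design:

<?php
// Sketch of a WebArticle class holding the article elements listed above.
class WebArticle
{
    var $title;
    var $authors      = array();
    var $contributors = array();
    var $affiliation;
    var $date;
    var $copyright;
    var $text         = array();   // paragraphs
    var $images       = array();
    var $topics       = array();
    var $links        = array();
    var $footnotes    = array();
    var $ordering     = array();   // element ordering for output

    function WebArticle($title, $authors, $date)
    {
        $this->title   = $title;
        $this->authors = $authors;
        $this->date    = $date;
    }

    function add_paragraph($text)   { $this->text[] = $text; }
    function add_footnote($note)    { $this->footnotes[] = $note; }
    function add_link($url, $label) { $this->links[$url] = $label; }

    function show()
    {
        print("<h1>" . $this->title . "</h1>\n");
        print("<p>" . implode(", ", $this->authors) . ", " . $this->date . "</p>\n");
        foreach ($this->text as $p) {
            print("<p>" . $p . "</p>\n");
        }
        // images, tables, footnotes, copyright, etc. would follow in $ordering order
    }
}
?>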


Article Publishing/Authoring Tools



It isn't enough just to have the above class that allows one to create an article. One would also like a way of authoring the article. Obviously the best way would be an integrated methodology that allows for easy and secure creation of the desired content. But before one designs such tools, one needs to perfect the tools on which the more advanced set is based. Still, there is a lot of designing that can go on in the case of authoring tools. And when we get to the end of that, do we then create a set of authoring tools that allow you to author other authoring tools? The list goes on and on.

Moving Forward with Design

For this reason it is best to move forward and do the needed work. Don't always move the bar higher. Make it a different bar. That way you can move forward in your designing by planning a different thing for the future. Modularity is necessary to abstract the various tasks into small enough manageable chunks, each a small piece of work that is necessary and a requirement for some small and well-understood entity.

From the various small pieces one constructs other pieces. Each new piece or module has its own well-understood purpose.

Purpose of Modules

A module will have a purpose, and it will also have an effect. The purpose is something that lives in the mind of the user, so that the user of the module can do some real thing in the world, or in our case just in software, which can project out into the world. But a module also has an effect. The effect of that module, what that module does, might make it obviously another type of software creature. This comes from the idea that someone other than the original creators or users of a module will see the module as something else that can be used for a different purpose. For example, the creators of the World Wide Web may have seen the Internet as a way to share data between computers, to view remote hard drives, to run processes with well-defined output formats across the globe through a common Get/Post language and methodology. Other people saw the Internet and they said "global gambling network" or "global porn network" or "easy way to share medical imaging across hospitals".

We create these things and there is no way to know what their futures are.



Site Map Classes

Site Map Classes will allow the charting out of the various aspects of a site. Pages are presented with a logical mapping, and then the controls are automatically spawned onto the page to allow for easy navigation between the various pages of a site. For example, it might be that a site is divided up into Museums, and in each museum there are galleries.

Or it might be that a page is generated from various queries.

Site Map Classes will handle the ability to edit and modify a site, as well as to access the modified data. There are states of these classes for editing purposes, and also some for just viewing or querying.
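A minimal sketch of what a site map node might look like (the class name SiteMapNode and the example museum and gallery URLs are made up for illustration):

<?php
// Sketch of a site map node: a site is a tree of named nodes (museums holding
// galleries holding pages, say), and navigation links are spawned from the tree.
class SiteMapNode
{
    var $name;
    var $url;
    var $children = array();

    function SiteMapNode($name, $url)
    {
        $this->name = $name;
        $this->url  = $url;
    }

    function add_child(&$node)
    {
        $this->children[] = &$node;
    }

    // Spawn a simple navigation list for this node's children.
    function show_navigation()
    {
        print("<ul>\n");
        foreach ($this->children as $child) {
            print("<li><a href=\"" . $child->url . "\">" . $child->name . "</a></li>\n");
        }
        print("</ul>\n");
    }
}

$site    = new SiteMapNode("Home", "/index.php");
$museum  = new SiteMapNode("Landscapes Museum", "/museum.php?m=landscapes");
$gallery = new SiteMapNode("Autumn Gallery", "/gallery.php?g=autumn");
$museum->add_child($gallery);
$site->add_child($museum);
$site->show_navigation();
?>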



Magic Class

The session class is one that allows for the easy creation of sessions. Using it, I can make a session contain an object that has all the data I need for the user as the user progresses through the pages. This is a magic class, in a sense: if the object does not exist, the session makes it.

The Magic Class may also contain required data objects. Required data objects are objects that manage fields of data for use in creating the pages that the user will want to see.
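A minimal sketch of the magic behavior, assuming php sessions are in use (the class name UserState is a placeholder):

<?php
// Sketch of the 'magic' idea: pull an object out of the session, and if it is
// not there yet, create it.
class UserState
{
    var $selected = array();   // e.g. selected gallery, selected date
    var $history  = array();   // pages visited so far in this session
}

session_start();

if (!isset($_SESSION['user_state'])) {
    $_SESSION['user_state'] = new UserState();   // the session 'makes' the object
}

$state = $_SESSION['user_state'];
$state->history[] = $_SERVER['PHP_SELF'];
$_SESSION['user_state'] = $state;                // write it back for the next page
?>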

Required Data Classes.

Required data classes will know how to make the pages spawn form requests for the data that is necessary for the user to do the things that the user needs to do, as defined by the values in the class.

This design is, in a sense, an input manager class.


For each required data element in a required data class, continuously ask for the data until reasonable data is presented.
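Here is a rough sketch of that behavior (the class name RequiredData, the field names, and the blank-field test standing in for 'reasonable data' are all placeholders):

<?php
// Sketch: a required data class that keeps putting up a form until the user
// supplies something for every required field.
class RequiredData
{
    var $fields;   // name => prompt

    function RequiredData($fields)
    {
        $this->fields = $fields;
    }

    // Returns true when every required field has a non-empty POST value;
    // otherwise spawns a form asking for the missing ones and returns false.
    function collect()
    {
        $missing = array();
        foreach ($this->fields as $name => $prompt) {
            if (!isset($_POST[$name]) || trim($_POST[$name]) == "") {
                $missing[$name] = $prompt;
            }
        }
        if (count($missing) == 0) {
            return true;
        }
        print("<form action=\"" . $_SERVER['PHP_SELF'] . "\" method=\"POST\">\n");
        foreach ($missing as $name => $prompt) {
            print("<p>" . $prompt . ": <input type=\"text\" name=\"" . $name . "\"></p>\n");
        }
        print("<input type=\"submit\" value=\"Continue\">\n</form>\n");
        return false;
    }
}

$required = new RequiredData(array("username" => "Please enter a user name",
                                   "email"    => "Please enter an email address"));
if ($required->collect()) {
    // all required data present; build the page the user asked for
}
?>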


Page Navigation Manager

The page navigation manager will use the Required Data Classes and all of the site map classes.



Generating File lists for File Processing and automatic scripting.


Abstract:

The images collected by a photoer (digital photographer) are often very large, and so their storage must be accomplished with a scheme for reusability, or archiving soon becomes a nightmare. In order to market our collected images we need an easy way to view and request an image for later use.




Consolidating disparate files.


When we are using our systems to be creative we don't always put ourselves into the frame of mind that allows us to collect and sort all of the stuff that we create. And so we do a 'save' and type in a filename. Maybe we are into it, so we just blow off steam by doing the automatic typing thing.


And so after a bunch of days or years of this we have a huge number of files and packages that we have collected over the years. We get to a point where consolidation or inventory is necessary.

First we need to do some file finding.


File Finding.


Most systems have a file finding utility. File finding is an expensive facility, so a lot of designs have the finder cache a list of what it finds. These systems are also designed so that if the list is destroyed (deleted), the system will recreate it. For example, gnome creates a thumbnails directory somewhere up inside of .gnome2 from your home directory. In this directory a whole lot of files exist that are the thumbnails of all of the images that have been viewed with the tools that use this interface. And so if you run gthumb, a very convenient gnome image viewer, everything that you looked at with the list window set to show images will appear as thumbnail png files. The first time you view these files, gthumb scales the pictures by creating the smaller sized image that is needed for the display. There is often a visible lag when this happens, as gthumb anticipates which pictures will be shown and then scales the ones that the algorithm returns as being next for viewing.

Observation of a slow system indicates to me that the anticipatory algorithm used by the viewer window predicts that the next picture will be the one that is later in the list. And so when one cycles through all the pictures in full screen mode on a slow machine, the system displays the images quickly as one moves down the list. But if one changes direction the images display slowly.

These observations suggest what must be happening in the software.


With command line Linux in a bash shell we use the find utility. This will create a text list of files that we can use. We also can use the -exec option of this facility to execute commands for each found file. The xargs facility may also be necessary if there is a lot of processing needed, because find often doesn't allow for very large buffering of input.

And so using the find facility I can, for example, find all of the files in a directory structure with a certain extension. In this case let's say the extension of open office documents, sxw.


[bperil@bluesky billbin]$ find ./ -name "*.sxw"

./dvdinfo.sxw

./libertywhereareyou.sxw

./phoneNumbers.sxw

./discourseonCulture.sxw

./CabinetMaker.sxw

./easycopyonweb.sxw

./exportedjobs2.sxw

./exportedjobs.sxw

./interstingcharacters.sxw

./ironagekingsandprophets.sxw

./landdeeds.sxw

./motsim.sxw

./outfi.sxw

./sendmaistartupslow.sxw

./spredsheetques.sxw

./swrambles.sxw

./zipleads_invoices_and_expences.sxw

./zipleads_invoices.sxw

./ideasas.sxw

./infiniteTV.sxw

./TheOther.sxw

./reg_expression_.sxw

./websitenav.sxw

./phprecompile.sxw

./mysqlinfo.sxw

./DocProject_old/Install_Diary.sxw

./DocProject_old/sambasetup.sxw

./DocProject_old/chargenwhy.sxw

./DocProject_old/dbookfind.sxw

./DocProject_old/datagatherprocess.sxw

./DocProject_old/Zipdirect.sxw

./DocProject_old/work to do.sxw

./DocProject_old/rpmupdates.sxw

./DocProject_old/resubmission.sxw

./DocProject_old/automaticRPMupdates.sxw

./DocProject_old/timeserer.sxw

./DocProject_old/telephone_mods.sxw

./mindcontrol.sxw

./unexpected tyrannies.sxw

[bperil@bluesky billbin]$


What would be a correct way to use this information? The -exec option lets us run a command on each found file. For example, we could delete all of these files. We won't do that now, as one of the files found is the very one that I am using to store my current work. If I deleted it, then it would not exist on disk anymore. This could have unexpected results for the word processing program that I am using.


A non-intrusive use of find is coupling it with grep to create a recursive grep that works correctly. For example, if I search for a token in a body of C Programming Language code, then I always use a find/grep hybrid. Often I will create an alias or a script that will let me do this. I believe that I have documented this before. How would I search my harddrive to find a file that might already have a write up on how to use the find/grep hybrid?

Here is a command line that will prove difficult for the system to handle:


find . -name "*" -exec grep "grep" {} \; > outfi


Seems simple and harmless; however, this command will produce a giant file that may just fill up your harddrive.


Ostensibly we imagine that after a time, if everything worked, the throwaway file outfi will have useful information in it that will direct me to my old work on grep. But maybe not, because I may not have searched from the correct root where I stored this data. Only by viewing the output will I know.

A very costly mistake would be not to look at the size of the file that is created. The command that I issued above created a huge file with a lot of useless information in it! And this is exactly why one needs to not just willy-nilly use find to write data to the disk! If I hadn't broken out of the above command then I wouldn't have known about the data flood.


[bperil@bluesky billbin]$ ls outfi

outfi

[bperil@bluesky billbin]$ ls -otr outfi

-rw-rw-r-- 1 bperil 353829516 Oct 8 09:27 outfi

[bperil@bluesky billbin]$ rm outfi

[bperil@bluesky billbin]$ find . -name "*" -exec grep "grep" {} \; >../outfi

grep: ./KDEBOOK/KDE20Development-html/graphics: No such file or directory

grep: ./webdatabasebook/winestore.data: Permission denied

grep: ./webdatabasebook/wda4.2/wda.4.2.zip: Permission denied

[bperil@bluesky billbin]$ cd ..

[bperil@bluesky bperil]$ ls -otr outfi

-rw-rw-r-- 1 bperil 33420 Oct 8 09:28 outfi

[bperil@bluesky bperil]$



Why did this happen? Because the file that I wrote became a file that was searched. This created a recursive situation. And so when the find found the file outfi, it found grep in it and kept writing to itself. Not a good situation, but an interesting effect. Writing the file to a location that is not in the find path meant that the outfi file would only grow to a reasonable size and not become enormous. The lesson learned here is that unexpected and unanticipated effects can occur when doing recursive finding and grepping. The filling up of storage space is an effect that needs to be prevented or dealt with. In this case deleting the offending file is the simple solution. The novice user might not think to do an ls -otr to find the sizes of things created. But soon the novice would see all of the harddrive space gone and things wouldn't be working. The system will most likely give a lot of various errors. This will be an unexpected situation, and the novice, if not aware of the effects, may be unable to proceed at this point. But knowledge gives them the skills to do the delete, and then they carry on.


The output from my find didn't tell me line numbers or filenames. Here is a modified version that does, listing exactly where the token is found:

find . -name "*" -exec grep "grep" -nH {} \;


The -nH flags are key to getting the data in a format that shows us just where things are. This is very useful when programming.


For example


find . -name "*.c" -exec grep "include" -nH {} \;


This makes a big long list of all of the times the word include is found in files with a .c extension. Here is a fragment of the (very large) output:


./billbin/distribution/shell_dat/src/othershell/shell2/shell2_display.c:25:#include "shell2_display.h"

./billbin/distribution/shell_dat/src/othershell/shell2/shell2_display.c:30:#include "shell2.h"

./billbin/distribution/shell_dat/src/othershell/shell2/shell2_display.c:31:#include "shell2_str_tables.h"

./billbin/distribution/shell_dat/src/othershell/shell2/shell2_display.c:32:#include <stdio.h>


It should be obvious that the way the output is formatted will greatly affect what can and can not be done with it. Also remember that shell commands don't work consistently from shell to shell and install to install. Most modern Linux shells are bash, and things will work in them for you the way they work for me. However, I have never found that command line find/grep behaves exactly the same from machine to machine. But in any case, it takes just a little tweaking to get utilities that are useful, and the facility to generate scripts that do very useful things.


And so now to make a script that will scale images.




When using find for advanced processing, the prudent will not just fire off some command line find and then do some expensive processing, like finding all of the files with a specific extension and transforming a clone of each one with a graphics file transformation. The prudent will craft a script that can then be run once it proves to be OK. This is essentially the equivalent of running the find as a dry run.


But the further thing about a script is that it can be run later by some other process, say in the middle of the night when allegedly no one is using the machine from a terminal. And so for transforming a library of images, we could write scripts that write scripts, and then run the created scripts from a cron job.


File Consolidation


It is useful, for purposes of inventorying and backup, to have a system of directories that are bins for various things.

For example, on a Linux box there is always a /usr/bin directory in which there will be a lot of executable files that are available for use by the user. There are also other such bins, and they become obvious when using the find command with an ls statement that shows only those files that are executables.


The following command will find all files that are executable by everyone:


find . -perm -007 -print


When run from root we see that there are very obvious bins of files. Some of these are for shared libraries. Some are for collections of executables.


If you run this, you notice how long all of this takes. Any time you are doing finds it is always useful to do the following:


ps -A


To show you if there are still some finds churning on in the background that you thought you were done with. If you see this, then you may want to use a kill command to stop these processes.


And so we see that the methodology for storing useful stuff in a file system is to have a logical place to put it. For example, web pages are typically stored at a known place on a machine. The server is then able to serve pages only from there and from nowhere else. And thus, even if you set up a soft link to a place outside of this, content or files anywhere else will not be available to the server. This prevents unauthorized access of user and system files by the httpd or other server. The locations of these bins differ between systems. For example, Apache for Redhat is called httpd, and its bin is at /var/www/ .

This is specific to the version of Redhat that I run and is not the same on other systems. It is also possible to open up directories in other places on a system for use by the server in question. Read the server man pages or study the many learning materials available.


In any case, it is useful to have a plan for how to store your stuff. In the case of image collections it may be impractical to keep all of the images in your portfolio on a single computer. In a single day of photoing one can collect gigabytes of images. In a single day of shooting a digital movie the amount of data must be huge. The task of the content provider is to make all of this wealth of data available in a timely way, to provide it in such a way that it is useful for a lot of different things, not just an artifact of a past day but a useful data forest with a rich set of functionality.


This is where the idea of scaling all of the pictures enters into the fray.


Just for my convenience I create a directory in my local area that I call CamPics. Further I place this below the Desktop. I have created a naming scheme for the directories that I place within this directory. If pictures are downloaded on a day, then they are placed in a directory that is named after the day. When I cache these pictures to a CD, I can then delete them from the harddrive.

Also I have an external USB harddrive that I use to backup the pictures.


Here is the simple scheme of what I do to download the pictures:


Laptops that have touchpads present a special problem for rehabilitation after the 'dominant' operating system gives up.


I loaded RH9 and had a major problem which I eventually solved. Not easy. So then I loaded Fedora Core 2, and the touchpad is all wacky again with no way to fix it.


I tried loading a touchpad driver but it needed to be built. As I hadn't loaded compilers, etc., to keep the install small, I was unable to build until I did this.


But the real solution for this Linux problem is to determine what I really want this machine for: to store my digital pictures when I am out photoing on a beautiful Fall day (or any day).


I do this day-tripping thing with my camera.


I use the notebook to cache the pictures.


The solution for my whacky mouse pad:

The easy way is to boot to a run-level that just gives a command line. Then the

sudo mount -t auto /dev/sda1 /mnt/minolta_cam

command should work from this command line.


Then I create a new directory, cd to that, and then copy (with the preserve flag!) from the USB device (the camera) to the directory where I am, as follows:


cp -p /mnt/minolta_cam/dcim/* .


The path to the pictures may be different, but it will be on the USB drive of the camera (its memory card).


The -p is VERY important to preserve time stamps.


/dev/sda1 is the SCSI device to which Linux maps the flash part of the camera. Depending on your use of the USB facilities of the Linux box, the device may actually be different, something like /dev/sdb1 or perhaps /dev/sda2.


Funky until you learn about the way that Linux maps USB as SCSI. The information is cryptically included in the files under the directory /proc/bus/usb.


sda1 means 'SCSI disk a, partition 1'.


These mappings should be familiar to Linux users as mirroring the way that file systems deal with harddrive devices mapping them as

/dev/hda1 or /dev/hda2


The sda devices map in the same way.


So, the notebook can be a very handy (and cheap) storage device for digital pictures for a software geek like myself.


I think that when I get to it I will not use the laptop anymore for this, but instead figure out a way to use a VIO or some such device that has a much larger harddrive than my notebook does.


The dumb thing about these handheld media players is that they don't let you mount another USB device from them to do file transfers. They act just like external USB storage devices, just like the camera does, but they can't be used to control that other device. Pretty dumb. Makes them very weak as far as I can see.


If you run the notebook at a runlevel where Xwindows is not yet running, it will be much zippier to load the pictures from the camera.


When I am done loading the pictures (which usually takes about 10 minutes for a 125Meg flash card),

I then issue the following command to delete the pictures:


sudo rm /mnt/minolta_cam/dcim/*


This deletes everything on the card right away so that I don't have to wait the long time the camera takes to do the same thing.


As I am doing commandline things I then umount the camera as follows:


sudo umount /dev/sda1

or even

sudo umount /mnt/minolta_cam


If you have a process with its current directory somewhere in the camera's file structure, the umount will not work. Then just type

cd

which will put you in your home (and not in the camera's file system).

Then do the umount.


Then I shut down as follows:


sudo /sbin/shutdown -h now



The description above was for the pictures being put on a notebook computer.



I then load them to a harddrive and transfer them over to my other machine. I do it this way because there are problems with the way that my low powered notebook machine functions. Better technology would make things easier.


After I have created this huge data collection of pictures I then use gthumb to view it. However, when the pictures of a certain directory are removed from the system and archived onto a CD or DVD or other external storage device, how can I provide thumbnails that are useful for viewing the pictures later?


Creating thumbnails and storing these is the obvious solution. For example, if one scales a 1.5Meg picture down to 10% of its original size one gets a very pretty picture that is useful for a webpage. Here is a printout of what I am talking about:




Notice that I have a large number of pictures displayed at once. I generate this with simple php scripts. But they work only when the pictures are available. My php scripts do not themselves generate the scaled images; they merely display them.

There is no reason why Php or Perl can't be used to generate these thumbnails as well. This might be a very good way to create them. But essentially making command line scripts does the same thing. And if we create dynamic scripts in php to generate thumbnails every time we want to see them, then we will be churning through a lot of processor cycles. Not only that, but we will not be able to create thumbnail distribution CDs/DVDs that cache the thumbnails and provide static html formatted views of them.

The php can be used to generate the html that we store. But at some point an actual image file must exist for the ISO image creation that is needed before burning a CD/DVD.
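As an aside, if php were doing the scaling itself, the GD extension could do it. Here is a minimal sketch, assuming php was built with GD 2 and jpeg support (the output filename calbird_gd.png is just an example):

<?php
// Sketch: scale a jpeg down to 10% with the GD extension and save it as a png.
function make_thumbnail($infile, $outfile, $factor = 0.1)
{
    $src = imagecreatefromjpeg($infile);
    if (!$src) {
        return false;
    }
    $w  = imagesx($src);
    $h  = imagesy($src);
    $tw = (int) ($w * $factor);
    $th = (int) ($h * $factor);

    $dst = imagecreatetruecolor($tw, $th);
    imagecopyresampled($dst, $src, 0, 0, 0, 0, $tw, $th, $w, $h);
    imagepng($dst, $outfile);

    imagedestroy($src);
    imagedestroy($dst);
    return true;
}

make_thumbnail("/home/bperil/Desktop/CamPics/07_27_2004/pict0038.jpg", "calbird_gd.png");
?>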


Conclusions:


The creation of thumbnail collections of images is accomplished with simple open source tools. Using these tools we crafted a distribution CD that allows us to have a master index/viewer for the thumbnails of the larger images. Using this we can select and catalog what is available in the larger database.





In order to scale and convert the jpeg images that I have into ones useful for a web site I can use the following command line:


jpegtopnm infile | pnmscale 0.1 | pnmtopng > outfi


There is a chain of command line utilities to run. There may be others as well. For each file that I want to use, I need to be able to scale it into different formats.


[bperil@bluesky bperil]$ jpegtopnm /home/bperil/Desktop/CamPics/07_27_2004/pict0038.jpg |pnmscale 0.1 | pnmtopng > calbird.png

jpegtopnm: WRITING PPM FILE

[bperil@bluesky bperil]$ kview calbird.png

[bperil@bluesky bperil]$ sudo cp calbird.png /var/www/html/images/

Password:


If I want to do these conversions I need a way to batch this all so that I can do it efficiently. I should have a bin of the pictures that I want to convert and then provide this for the web designer. The designer is then able to use the pictures and convert them seamlessly.

However, there is, of course, work involved in getting this kind of seamless use. Right now I know how to convert these pictures on a command line. I need to write scripts to do this, and perhaps provide a front end for running them.

After I do the conversion I have to copy it over every time to the bin in question. This seems like a non-optimal way to do things. It would be good if I had a file viewer that would let me add files to a list that would then be fed into a script. Each element would be run one at a time as a shell command. The best thing would be if gthumb could be extended to allow for the conversion of the images and the placement of these in locations that do not conflict with the raw picture data.


The gthumb interface has the capacity to create a 'catalog'. This is a way to collect files. These are saved in the directory $HOME/.gnome2/gthumb/collections

These files are very useful as they have the names of all of the files in the collection. Next one would just need to append an output filename to each image, and provide a bin directory for the images.


Probably the best way to do this would be to create scripts into which one would pipe the name of a file that has the data delimited in a database-like format.


The files look like this:

[bperil@bluesky collections]$ cat dudecat.gqv

"/home/bperil/Desktop/CamPics/09_23_B_2004/pict0098.jpg"

"/home/bperil/Desktop/CamPics/09_23_B_2004/pict0010.jpg"

"/home/bperil/Desktop/CamPics/09_23_B_2004/pict0111.jpg"

"/home/bperil/Desktop/CamPics/09_23_B_2004/pict0118.jpg"

"/home/bperil/Desktop/CamPics/09_23_B_2004/pict0115.jpg"

"/home/bperil/Desktop/CamPics/09_23_B_2004/pict0098.jpg"


That could very easily be converted into input for a script.
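Here is a sketch of that conversion: a php script that reads a catalog file and writes out the jpegtopnm | pnmscale | pnmtopng command lines, which can then be redirected into a shell script and run (or handed to cron). The catalog path, the output directory, and the 'dude' output prefix are examples only:

<?php
// Sketch: turn a gthumb catalog file like the one above into a shell script of
// scaling command lines.
$catalog = "/home/bperil/.gnome2/gthumb/collections/dudecat.gqv";
$outdir  = "/home/bperil/billbin/dudeweb";   // example bin directory
$prefix  = "dude";                            // example output name prefix

$lines = file($catalog);
$n = 1;

print("mkdir " . $outdir . "\n");
print("cd " . $outdir . "\n");
foreach ($lines as $line) {
    $picture = trim($line, " \"\r\n");        // strip the surrounding quotes and newline
    if ($picture == "") {
        continue;
    }
    $outname = sprintf("%s%04d.png", $prefix, $n);
    print("jpegtopnm " . $picture . " |pnmscale 0.1 |pnmtopng > " . $outname . "\n");
    $n++;
}
?>

Running something like php makescript.php > scaleit.sh and then sh scaleit.sh would produce the same kind of script as the hand-built one further down.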


Or maybe I could figure out how to extend gthumb to allow the scripts that I write to be integrated into the gthumb interface.


The interface that I want will have various output formats that will make it versatile. Here is what is needed:


  1. A way to collect all images of interest and have them be grouped in logical 'catalogs'. Gthumb provides this as described above.

  2. A way to label these images for use by a webpage. These could just be the names that we had, with a scaling factor applied. These labels would be the output filenames, and they would need to be unique.

  3. A way to create a directory as a bin for the scaled pictures.

  4. A script that will create the scaled images. This script will store the images using the names from step 2.

  5. A further script that will generate an html file allowing the images to be displayed in various manners:

      a. as a group of pictures with no comments or text.

      b. Pictures annotated with text.

      c. Pictures with different bordering and backgrounds.

The interface would be thin at first, and then richer later.


Here is a script to make a directory and do some scaling. The file list was generated from the file that I found in the catalogs that gthumb provided.


mkdir /home/bperil/billbin/johnweb1

cd /home/bperil/billbin/johnweb1

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0079.jpg |pnmscale 0.1 |pnmtopng > barents0001.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0078.jpg |pnmscale 0.1 |pnmtopng > barents0002.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0077.jpg |pnmscale 0.1 |pnmtopng > barents0003.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0076.jpg |pnmscale 0.1 |pnmtopng > barents0004.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0075.jpg |pnmscale 0.1 |pnmtopng > barents0005.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0074.jpg |pnmscale 0.1 |pnmtopng > barents0006.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0073.jpg |pnmscale 0.1 |pnmtopng > barents0007.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0072.jpg |pnmscale 0.1 |pnmtopng > barents0008.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0071.jpg |pnmscale 0.1 |pnmtopng > barents0009.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0070.jpg |pnmscale 0.1 |pnmtopng > barents0010.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0069.jpg |pnmscale 0.1 |pnmtopng > barents0011.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0068.jpg |pnmscale 0.1 |pnmtopng > barents0012.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0067.jpg |pnmscale 0.1 |pnmtopng > barents0013.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0066.jpg |pnmscale 0.1 |pnmtopng > barents0014.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0065.jpg |pnmscale 0.1 |pnmtopng > barents0015.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0064.jpg |pnmscale 0.1 |pnmtopng > barents0016.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0063.jpg |pnmscale 0.1 |pnmtopng > barents0017.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0062.jpg |pnmscale 0.1 |pnmtopng > barents0018.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0061.jpg |pnmscale 0.1 |pnmtopng > barents0019.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0060.jpg |pnmscale 0.1 |pnmtopng > barents0020.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0059.jpg |pnmscale 0.1 |pnmtopng > barents0021.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0058.jpg |pnmscale 0.1 |pnmtopng > barents0022.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0057.jpg |pnmscale 0.1 |pnmtopng > barents0023.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0056.jpg |pnmscale 0.1 |pnmtopng > barents0024.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0055.jpg |pnmscale 0.1 |pnmtopng > barents0025.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0054.jpg |pnmscale 0.1 |pnmtopng > barents0026.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0053.jpg |pnmscale 0.1 |pnmtopng > barents0027.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0052.jpg |pnmscale 0.1 |pnmtopng > barents0028.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0051.jpg |pnmscale 0.1 |pnmtopng > barents0029.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0050.jpg |pnmscale 0.1 |pnmtopng > barents0030.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0049.jpg |pnmscale 0.1 |pnmtopng > barents0031.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0048.jpg |pnmscale 0.1 |pnmtopng > barents0032.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0047.jpg |pnmscale 0.1 |pnmtopng > barents0033.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0046.jpg |pnmscale 0.1 |pnmtopng > barents0034.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0045.jpg |pnmscale 0.1 |pnmtopng > barents0035.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0044.jpg |pnmscale 0.1 |pnmtopng > barents0036.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0043.jpg |pnmscale 0.1 |pnmtopng > barents0037.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0042.jpg |pnmscale 0.1 |pnmtopng > barents0038.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0041.jpg |pnmscale 0.1 |pnmtopng > barents0039.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0040.jpg |pnmscale 0.1 |pnmtopng > barents0040.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0039.jpg |pnmscale 0.1 |pnmtopng > barents0041.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0038.jpg |pnmscale 0.1 |pnmtopng > barents0042.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0037.jpg |pnmscale 0.1 |pnmtopng > barents0043.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0036.jpg |pnmscale 0.1 |pnmtopng > barents0044.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0035.jpg |pnmscale 0.1 |pnmtopng > barents0045.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0034.jpg |pnmscale 0.1 |pnmtopng > barents0046.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0033.jpg |pnmscale 0.1 |pnmtopng > barents0047.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0032.jpg |pnmscale 0.1 |pnmtopng > barents0048.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0031.jpg |pnmscale 0.1 |pnmtopng > barents0049.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0030.jpg |pnmscale 0.1 |pnmtopng > barents0050.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0029.jpg |pnmscale 0.1 |pnmtopng > barents0051.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0028.jpg |pnmscale 0.1 |pnmtopng > barents0052.png

jpegtopnm /home/bperil/Desktop/CamPics/09_27_2004/pict0027.jpg |pnmscale 0.1 |pnmtopng > barents0053.png



Now that I have this I should be able to quickly generate an index. Or perhaps write code that would look in the directory, read all the files, and then generate an output html file. Sounds like a php script would do this well.
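A minimal sketch of such a script (assuming it is run from the directory that holds the scaled .png files; the page title is an example):

<?php
// Sketch: read every png in the current directory and write out a bare-bones
// index page of linked <img> tags.
$dir   = opendir(".");
$files = array();
while (($f = readdir($dir)) !== false) {
    if (preg_match('/\.png$/i', $f)) {
        $files[] = $f;
    }
}
closedir($dir);
sort($files);

print("<html><head><title>Thumbnail Index</title></head><body>\n");
foreach ($files as $f) {
    print("<a href=\"" . $f . "\"><img src=\"" . $f . "\" alt=\"" . $f . "\"></a>\n");
}
print("</body></html>\n");
?>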


Oh, well, more work for another day.

Application Description


A helper for developing formats and prototypes for documents that allows a category of inquiry to be entered, manipulated, and stored. Templates of new document types can be entered. It will allow for leveraging of existing tools.


Should output standard formats including OO formats as xml.


Should allow for users to store and view the various types of formats, as well as all documents that are created with these formats. Should allow users to serve documents based upon the saved documents to users attached to a server.


Users obviously need to be managed through accounts.



There are a lot of packages already available to do these things. I don't know specifically what people are using. However, I know that the paradigm that I describe here is active and living elsewhere. Obviously I should investigate what already is used by others.



Description of the situation:


I would like to have a full-up Xwindows box that will allow me to take my photos on the road. I would like this box to have packages that are compiled, but I would like to have these packages built elsewhere.


Question: What do I need to do to accomplish this?


Answer:

Obviously the packages and the sources and the compiler must live somewhere. Any Linux box with gcc and associated tools installed can, of course, compile any package for any other machine that is supported by gcc. The problem then becomes one of configuration. The sources in question must be built against code that is for the target box and not necessarily for the box on which the code is being compiled. One must determine how to set compile flags and include paths to mirror what will be on the target and not on the development machine.


The details of such an implementation need to be worked out.


The reason for asking the question is as follows: I have an older notebook computer on which Windows gave up the ghost some months ago (November 2003). And so, with this worthless lug lying around, I decided to install Linux on this box. Unfortunately the touchpad driver does not play nice with my install.

And so I discovered a package for this touchpad driver available as source and wanted to compile and install it on my notebook. I tried, and was very quickly reminded that I had never installed gcc or other development tools on this box.

And so I resurrected my Fedora Core 2 discs (which I had installed in August of this year) and installed the tools needed onto the notebook.

I would rather not have, because these take up a lot of room on the harddrive. I want a lot of free space on this small (6 gig) harddrive because I want to cache pictures to it while I am on the road. During my travels I collect a lot of pictures as a matter of course in a single day. I need to download these so that I can reuse the memory card for the camera.


Other solutions:


As the main reason I want to use the notebook in question is to store pictures, I could run Linux at a runlevel that does not bring up Xwindows. Doing this would only give me a command line prompt, and that would be fine for mounting the external harddrive and the camera and storing the pictures in the way that I would like. This would save me a lot of time in the download.


Another alternative would be to get a card reader for the format that I am interested in and use that card reader to allow for a quick switch of the camera's memory card. Then I swap in a different card.


Another alternative solution would be to have a camera phone, so that when I take a picture a wireless connection transfers the picture to my webserver without me having to do anything other than view the sucker later.


Another alternative would be to have a camera that is attached to a harddrive with a ton of space on it already, in order to obsolete the whole idea of having to transfer the images in the first place.


And so we see that there are many alternatives to our problem. Sometimes we learn things only to determine that it isn't worth continuing to do things the same way we did them before.


Clearly the most elegant solution would be to kick up the storage in the camera. Also, a device that connects seamlessly to our camera would be very desirable.



  1. Set up a server for my use.

  2. Set up Blog on this.

  3. Set up to display pictures for sale on the Internet

  4. Set up to allow for billing clients

  5. Set up to display the pages of advertisers and get paid to do it for them.

  6. Allow for easy input of my content into this system.

  7. Determine strategy of backup.

  8. Determine strategy of upgrade.

  9. Determine strategy of maintenance of content.


Ideas for Weblogs/Blogs

  1. Engineering questions.

  2. Engineering interface with management questions.

  3. Forums on engineering and the state of engineering.

  4. Political discussions of how to improve the system.

  5. Avoid flame-bait type discussions.



Section (paid) that shows layouts of designs for use in systems. This section will be an ebook of sorts that will include all different types of designs that illustrate the real way to do real things. This would include code and working modules. This section will be available to paying working engineers who have a need to know these things, i.e., there is the idea that some sections would be restricted to users in particular fields.


Prior Use:


A database of prior or first use. This section would document features and indicate how they evolved and how they were first implemented. Anything that is suddenly identified as being an entity can be added and the evolution of this entity can be documented. This would then be a paid access database.


Policy of ownership:

Anyone who logs into my Blog will thus own their input. They will retain their copyright, however I do not have to keep it for them.


Limiting bandwidth. Server must limit the bandwidth of input from users to avoid overrun. If a user wants more bandwidth the user must pay for this.





Remote Control of a Computer System

A wireless keyboard is, in essence, what I want. However, what I really want is programmability, portability, and protection from bad connections. A wireless keyboard connection is awesome; however, it doesn't provide any alternative user output. Here is the bare bones of what I would need:

  1. a device that has an ssh shell

  2. device must have internet connection

  3. device must be small, the size of a remote control or phone.

  4. Device must be able to connect to the IP address and send a header which will be a request to do something, i.e. show a picture, compile a program, copy a file, or whatever.

  5. Connection to the controlled device must, of course, have a firewall. Users of the interface

As this is all very simple, in a lot of ways an internet enabled phone is already a remote control device. One needs a server on the controlled machine that accepts headers and thus allows control through this interface.


A hand held device that is already a remote control may be available. This device would most likely have some kind of wireless interface that does not require a wireless ethernet connection. Does Creative ship such a device with its high end sound card?


It would be good if the remote control were compatible with other remote controls. And it would be good if such a device were user programmable.


Remote control through a PDA or through an Internet enabled phone would be through some kind of a server. And this server would need to run on the controlled machine.


Task: Investigate and learn about what is already available in this area. Create a presentation that explains the options.


Naturally an ssh or Xwindows interface would be the best, as the control aspects are built in. If an Xwindows client will run and work on the controlled system, one can ssh over to that machine and then run the Xwindows control app through the connection. The system will then be controlled remotely.


The design of the system should include the server daemon on the controlled machine. This is to allow for a logical single thread of control, which is a requirement in such a system if logical and controlled application of process is to be maintained even when there are multiple connections. If we do not have a server, then attempts to have multiple connections may conflict with the local machine.


Fortunately most applications that have the effect of control have such interfaces already, as SSH and Xwindows work very well together. And thus there is little more that one need do except write the applications.


If there is not a server for the application, then the individual control program is written to the Xwindows protocol and run from a command line in a shell on a remote Xterm window. The application would need a way to test if there is already an instance of it running locally. Having a server allows a way to control access without limiting the number of users. Users would need to contend with other users, and the server is the place where the users are queued and allowed to logically control the controlled device in whatever manner makes sense per the design of the system.


First a trivial example is discussed and then a more complex example.

Trivial Example: Channel Control for a television system.

We have all seen the episode of whatever situation comedy where there are two or more people in a room and they each have their own remote control. And then they battle with each other over what station they want to watch. The conflict is amusing to us because it sets up the expectations that the comedians then meet, which we find funny.

Can anything bad happen in this situation? Nothing that makes me want to warn the line crew to danger.

Non-trivial Example:

A control application for a robot written to work in Xwindows (anything in a Linux GUI) will load and run and control a remote robot without having to do anything more than open an ssh shell, rlogin, and run the sucker. If there are competing Xwindows applications running that both want to control the robotics, then how does the system know which of them ought to be allowed to control the robot?


If the system design does not account for this possibility, then unexpected results may cause unexpected behaviour. Industrial robotics mandates the elimination of unexpected behaviour as a safety requirement. Never should a robot perform motion that is not totally interlocked and safe. This strong requirement mandates the design of strong systems. And thus if one does create an interface that doesn't account for what is possible in a real world situation, then one must discover this possible thing at the point of it becoming a problem.


Scenario and a Question:

From a 'theory' of engineering point of view, a scenario and a question: Suppose an engineer understands the problems and accounts for them so that these problems never actually materialize, and then sends a big bill for doing this. How does this engineer justify his cost when none of the things that he imagined ever happen? Does management know what this engineer has accomplished?



(insert forum here)


Remote Control, A review of what I find on the web.



Linux CD burning is a simple matter. I want to make it simpler. XCDroast wraps the following utilities up into a graphical package:


cdrecord, cdda2wav, readcd and mkisofs


I would like the ability to have a little more ease of use.


When grouping files, xcdroast allows for creating files that look like this:


[bperil@bluesky cdlists]$ cat campics14.lst

#

# X-CD-Roast 0.98alpha13 - Master-Paths

# created: Thu Nov 18 19:34:35 2004

# by: root@bluesky.localdomain

#

ADD2 = "/home/bperil/billbin/CamPics/08_20_2004","/08_20_2004/"

ADD2 = "/home/bperil/billbin/CamPics/08_20_B_2004","/08_20_B_2004/"

ADD2 = "/home/bperil/billbin/CamPics/08_22_2004","/08_22_2004/"

ADD2 = "/home/bperil/billbin/CamPics/08_22_B_2004","/08_22_B_2004/"


Utilizing this I could create or craft the files in another way. But then I would really want to know what the command lines for mkisofs are like.


Here is a command that ought to make an iso image for me:


sudo mkisofs -o track-02.img -graft-points \

/08_20_2004/=/home/bperil/billbin/CamPics/08_20_2004/ \

/08_20_B_2004/=/home/bperil/billbin/CamPics/08_20_B_2004/ \

/08_22_2004/=/home/bperil/billbin/CamPics/08_22_2004/ \

/08_22_B_2004/=/home/bperil/billbin/CamPics/08_22_B_2004/



I am running this and I am not sure how it will come out.


It makes a big file. I want to specify an explicit output path or the file will show up where I run the command. I don't know how big this will get. Let me try a few more files.


I want to use some of the other command line options. The mkisofs man page talks about the .mkisofsrc file. I should look for this and see if it exists. I did not find this file, but I did find the files that xcdroast creates. They are under root, which makes sense because root is the user that is allowed to burn CD's.


I can create my own .mkisofsrc file and put it into a directory that I choose. Currently I am using the directory /usr/cdimages.


If I were going to make scripts that do these things, I would probably have everything passed on the command line except for things that do not change. The volume name should probably be different every time.
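
Here is a minimal sketch of such a script, with the output file and the volume name passed on the command line and everything else fixed. The graft points are just my CamPics example, and -V is the mkisofs option for setting the volume ID:


#!/bin/sh
# Usage: makeimage.sh <output.img> <volume-name>
# Builds an ISO image in /usr/cdimages from a fixed set of graft points.
OUT=/usr/cdimages/$1
VOLNAME=$2
mkisofs -o "$OUT" -V "$VOLNAME" -graft-points \
    /08_20_2004/=/home/bperil/billbin/CamPics/08_20_2004/ \
    /08_20_B_2004/=/home/bperil/billbin/CamPics/08_20_B_2004/


Run as, for example, makeimage.sh track-03.img CAMPICS_15; only those two things change from run to run.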

When doing the image creation we need a lot of hard drive space for storage. If disk space is sparse, one will not be able to save all of the ISO images to disk and then burn discs one at a time. One will thus need to create an image, burn it, and then maybe delete some other images. There is a level of accounting that you need to do. If you are in the business of creating a lot of CD's you may need to figure out a way to keep these images available. I suppose that there are CD burners that do just that.
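
A quick way to do that accounting before creating the next image is to compare free space against the size of the next source set. These are just the standard df and du commands with my own paths:


df -h /usr/cdimages                                 # free space where the images go
du -sh /home/bperil/billbin/CamPics/08_22_2004      # size of the next set to image
ls -lh /usr/cdimages/*.img                          # images already on disk that could be deleted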


I have created images for discs 15-29 of my CD's. I will burn them all tomorrow. I did not use any scripts, just XCDroast.




Nov 20, 2004


I have completed my archive of all of the camera pictures. I am trying now to learn how to burn a DVD, which would make archiving a lot simpler.


Here is an interesting command:


dmesg


and it gives a lot of useful information. Running this was suggested at http://www.linuxworld.com.au/index.php/id;286528247;fp;2;fpid;37

which talks about burning DVDs.


Another site suggested that I download a binary that would allow me to run the cdrecord add-on, cdrecord-prodvd.

I got this and put it where the site said to put it. The site said I would get a message that I had it and needed a key. I don't believe that I got this message.

I also turned on logging, and now there is a log file in the .xcdroast directory under root. This gives us information on command lines, etc. And it might show me other things too.



Finally, after a lot of doing, I have gotten the DVD burner to work. Here are some tips:


1. You need a key. You may need to pay for a license key after 5 April 2005, or you have to keep updating the key with new versions.

2. The 'a' or 'b' versions of the system will not work after one year from their build date. So if you need to run these later you will need to turn your clock back. Or just download a new version of the tool.

3. The chmod to make the file executable is necessary.

4. Simulation mode does not always work for all devices.

5. DVD+RW doesn't seem to mind if you try to write a bunch of times. Eventually things will work.

6. Look in the /root/.xcdroast/xcdroast.log and /var/log/messages for some interesting information if things are not working.

7. XCDroast starts up set to burn in simulation mode, and yet simulation will not work with some types of burning. Or you will think you burned a disc and there won't be anything on it because you were simulating. It might sound dumb, but I had this problem: I thought I had burned an image but I was just simulating. For this reason it really helps to realize that XCDroast is just a wrapper that calls the real tools that do the real work. If you are having problems and need to debug, you have to read the outputs, the logs, and the messages in /var/log. If you have a problem you probably want to save the outputs. It would help if these were saved automatically, but then they would need to be purged.


It is very cool that I will have the ability to use the DVD burner to back up the large amount of picture files that I have. I am excited to see the data when the device finishes writing.

I am very pleased that I now have my data in this large scale format. I can set up a slide show from a single DVD and it is very cool. I can now send a lot of data to someone in a format that is under my own control.


Creating Audio CD's

The next step is to learn how to make audio CD's. I think that I know what I did wrong the last time I tried: I didn't ask the system to let me see single files.

I will try now.

What I did not understand before is that when XCDroast told me to go to the write window, the write window only gives access to 'tracks' that are in one of the configured directories. A wave file will be treated as a single track. But the write will only find it if the *.wav is in a directory that is set in the setup windows. This seems obvious once you figure it out, but it is not very clear at first. Now I see it and I am burning a music CD of some things that I created about two or three years ago. These were made with Cakewalk on Windows.
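
Since XCDroast is only a wrapper, the same audio burn can be done directly with cdrecord once the wave files are together in one directory. A sketch; the dev= address comes from cdrecord -scanbus and will be different on every machine:


cdrecord -scanbus                          # find the burner's bus,target,lun address
cdrecord -v dev=0,0,0 speed=8 -audio -pad track01.wav track02.wav track03.wav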


Recording Audio

All of the *.wav files that I have were created on Windows with Cakewalk or Creative software, using a Sound Blaster or other wav capture on that system. It is not a trivial matter to make good audio wave files. I believe that the standard format for waves does not have enough dynamic range and is not sampled at a fast enough frequency. Instead of 16 bits at 44.1 thousand samples a second I would choose 24 bits at 96 thousand samples a second. Why? Because of the way that the human ear hears sound.
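
As a rough check of what the higher resolution costs in storage (my own arithmetic, stereo assumed):


16 bits x 2 channels x 44,100 samples/s = 1,411,200 bits/s, about 176 KB/s, roughly 10 MB per minute.

24 bits x 2 channels x 96,000 samples/s = 4,608,000 bits/s, about 576 KB/s, roughly 35 MB per minute.


So the better format costs a bit more than three times the disk space of the CD format.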


Now that I am able to make the audio CD's I need to get the Linux boxes to record as well.


An important aspect of doing digital recording is to get the levels correct. If you do not, then you can get very bad effects, as aliasing creates the sound of someone banging on a reverb, or worse. For that reason I have compressor/limiters hooked up between my mixing board and the computer, and thus I always get levels that are neither too low nor too high. If you want quality recordings you have to do this.

Also, for recordings where you want to retain quality so that they can be amplified without sounding tinny, try to use as fast a sampling rate as you can. I really stress that you will notice the difference when you use 24 bits/sample and 96 thousand samples/second.


I tried the gnome-sound-recorder and it did not work for me. I don't know why.


Here is what the /var/log/messages showed me:

Nov 20 22:43:28 bluesky modprobe: modprobe: Can't locate module sound-slot-1

Nov 20 22:43:28 bluesky modprobe: modprobe: Can't locate module sound-service-1-0

Nov 20 22:43:28 bluesky modprobe: modprobe: Can't locate module sound-slot-1

Nov 20 22:43:28 bluesky modprobe: modprobe: Can't locate module sound-service-1-0


If I run the audio recorder as root will it then work for me?

How does one transfer styles in OpenOffice? Is it by creating template documents? Is it by copying the XML style definitions into a new document?




Above shows a screen shot of the Format/Styles/Load selection from the Open Office menu.


After that we get a dialog box that allows the opening of a file from which to load the style by selecting the From file button.




From looking at the selection box for loading a style we see a lot more options than just that. We also see 'Templates'. Shouldn't that be the way that we load our styles?


Templates


The template feature is accessed through the same window, however the template must be added to the configuration by using the Style Organizer. To get there one must navigate through the catalog, at the same place in the menu that one accesses the Style Load facility. The following window will appear:



From that select the Organizer to get to here:






Notice that we select to import a template. This is from a file, one that we saved previously as a template. It is wise to have a common location for these so that they can be saved and backed up logically. After we load the file of interest, by navigating to the directory where it is located, then we can add it to our Style Organizer and it is now available as a template in the Load Styles dialog box. Compare the picture of the Load Style dialog from above to the one below. Notice that a template style now exists.




We can now use this to load styles from this template without having to navigate back to that file. However, as noted before, if we move the file we probably break the link, so it is important to know what templates we have and how the system really stores this data. There is most likely a file that has this list. From this list we could craft a script to copy all of the templates loaded into a system. This would be useful if someone had not put all templates in the same location. I am sure that if I need to I can find this by looking for it.

How would I do this?
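
I have not tracked down the file where OpenOffice keeps that list, so the safest thing I know how to do is search for the template files themselves. A sketch, assuming the templates were saved with the usual .stw Writer template extension:


# Find every Writer template under the home directory and copy each one
# to a single backup location, preserving time stamps.
mkdir -p /home/bperil/billbin/templates
find /home/bperil -name "*.stw" -exec cp -p {} /home/bperil/billbin/templates/ \;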


The Open Office interface provides a lot of good help. Use the help system when you have questions about functionality.



Pulling together the far flung files


Please note that if you are doing a writeup about moving files around, and one of the files that you are moving is the one in which you are documenting your actions, you do face the possibility of moving the file that you are using. So don't do this.

Organization is a requirement. However, it is not a good idea to just put everything in the same place. The following command will give me a good list of all of the files of pattern “*.sx*” :


[]$ tree -P "*.sx*" -f -i | grep sx


I run this from my home directory and get a long list of files. Here is a fragment of that list:


./CameraPicdownloading.sxw

./Desktop/AmilliaPub/Democray.sxw

./Desktop/AmilliaPub/DesignDocs/APC1_0.sxw

./Desktop/AmilliaPub/DesignDocs/Qtfirsttime_log.sxw

./Desktop/AmilliaPub/DesignDocs/SoftwareNotes.sxw

./Desktop/AmilliaPub/DesignDocs/Trjectory.sxw

.

.

. and so on for three or four pages.


It was a three page list in the above font, interesting to me at the time, but not worth printing forever in a book or storing in this file. Clearly I have been lax in keeping track of all of the different files and I have been inconsistent in my inventorying. Perhaps I could do the same kind of thing with these files: sort them by date, and then move them into directories based upon their dates of creation. That way I would always have a backup of what I have done. This is an expensive form of source control, and there are many drawbacks to doing things this way.


Also there may be duplicate names that aren't even different from each other. And I was copying things without using the -p flag on the cp command, which means that for a long time I was not preserving time stamps on my files. Stupid me; the most basic thing that I needed to do and I wasn't doing it. That would have made my life a lot easier.


Suppose I created a script that would copy everything to another directory, preserve the time stamps, and also rename any duplicates. Then I would have the whole mess all in one place. I would use the above list after having piped it into a file. I would then use a text editor to create command line copies by crafting a script from the output of the tree command. I would do global replaces of the “./” and instead insert “cp /home/bperil/”.

I would add a . at the end. Then I would run this script from a directory that would be the place to collect all of the files of that type. Further, I could make these read-only. Then I could copy the whole mess to a single DVD (if it isn't too big). And then I would have a hard archive of everything on this machine with a name matching the pattern “*.sx*”. Those would be Open Office documents.
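
Here is a minimal sketch of that crafting done with sed instead of a hand edit; it is the same idea of turning each "./path" line from tree into a cp command, and the collection directory is just an example. Note that it does not rename duplicates, so files with the same name would overwrite each other, and paths containing spaces would need quoting:


# Build a copy script from the tree listing, then review and run it.
tree -P "*.sx*" -f -i | grep sx > sxlist.txt
sed 's|^\./|cp -p /home/bperil/|; s|$| .|' sxlist.txt > copyall.sh
cd /home/bperil/billbin/docs && sh /home/bperil/copyall.sh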


After I did this I could do the same for other files of different file extensions.


It is important to realize that this could represent a lot of information. One must investigate the storage requirements of doing this.

It can't be assumed that files are always copied with the -p preservation option set. And so when deciding which is the newer file I will need to open and look at the files. There may be other data inside the file that records the last date modified. Knowing this would help me to determine the relative ages of files.
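
One piece of data that is inside the file itself: an OpenOffice .sxw is a zip archive containing a meta.xml, and that file carries the document's own dates. A sketch of pulling that out; the dc:date tag is my guess at the relevant field:


# Print the modification date recorded inside the document itself.
unzip -p ./CameraPicdownloading.sxw meta.xml | grep -o '<dc:date>[^<]*</dc:date>'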


For any file that is not a duplicate, I ought to just take the original and put it into a reasonable place. This might be leaving it where it is, or it might be moving it to a different location. The idea of having the archive is just for backup purposes. Having all of the files in one place makes moving what is part of a logical grouping of files a lot harder. Also it increases storage requirements. It is best to have a bunch of directories.


Assuming that you have a logical directory structure and you have placed all of the files that you want in the place that you need them to be, then creating a mirror of these, with directories that share similar roots kept together, is a little problematic. To do this we need to identify which sub-trees are different copies of each other. We need some kind of sub-tree analysis. The diff command provides this functionality.
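
For two candidate sub-trees, a recursive, brief diff tells quickly whether they really are copies of each other. The second path here is only a hypothetical mirror location:


# -r recurses into subdirectories, -q reports only which files differ
diff -rq /home/bperil/Desktop/AmilliaPub/DesignDocs /home/bperil/billbin/docs/DesignDocs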


Well, sometimes there are even easier ways to do things. And clearly we would like to only have to diff files that might actually be the same. So if we think about it and look at it a little closer, we notice that there is a find tool in the main Kdesktop menu. And so using this we can get a file list for the pattern “*.sx*”. Then sort this by name and we see the duplicates. Here is a picture of this:





Notice that we can select save to file. This probably gives us a similar output to what we had before, only it is sorted by filename.


I saved it as an HTML file, and then opening this file I get a linkable list. This list lets me open all of the files. However, it loads them from the local disk into the cache. So if one were editing these, then one would be editing the temporary copy of the file. This would result in even more unnecessary copies. But at least it gives us an easy way to get at the files. We could also do this from the Kfind window anyway, and if we do it from there we really are editing the files in question.


Optimization of tools means that we need to understand what all of the tools are. We need to know what we are trying to accomplish and then investigate what is available. Cleaning up the files of a local user should only be done by that user. No one will want their system administrator mucking around in their files. So the best that a backup can do is just to take the whole thing from /home/username as the root of the backup.


In the case of what I need to do I have to inventory what is there, and decide if there are duplicates. I then need to diff the dups and see if they are really different (even if they have different times).

If the times are different then they may still be the same. If the times are the same then one must have been copied from the other (or from the same root from somewhere) with the -p flag set to preserve the time.
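
Checksums make the comparison independent of the time stamps altogether: identical content gives identical sums no matter what the dates say. A sketch using md5sum:


# Sum every OpenOffice file, sort so identical sums sit together, and
# print only the duplicated groups (-w32 compares just the 32-character sum).
find /home/bperil -name "*.sx*" -exec md5sum {} \; | sort | uniq -w32 --all-repeated=separate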


Here is what I am going to do. I am going to use the list that is there and move all of the files out of directories that I don't want to use as a file bin (such as my /home/bperil directory!). But I need to do this only for files that don't have duplicates with the same name. So I will go down the list provided in Kfind, select the ones of interest by using the mouse and the shift key, and then move them all to a different place. I will put them in billbin, where I like to put all of my files. I will put them in a Doc directory.


Or, just decide that there ought to be none in the home directory and do a move command from there to our new directory. But before I did this I closed the word processor, which was working on this file. After I ran the following command I had to navigate to the new directory and find the file there. Here is the command that I used:


mv *.sx* billbin/docs


These ideas might seem trivial to an experienced user, but these are exactly the kinds of things that confuse the novice. I have done this kind of moving and cataloging of my files throughout my career as a software jock, and over time I saw what others did as well. When a user starts by having a logical way to store data, and thinks in terms of packages of files, then in the long term the user is able to do what he or she needs to do in an easier way. The backup and recovery problem is only a problem when one has not been doing this properly. Often the user is so taken up in doing what is necessary that he or she doesn't do the backup properly.

With new systems having so much storage space, it is often convenient to copy a whole directory structure to a flash card or other form of backup. But then later on if one copies things back, there might be a problem. If a user always uses the same structure, and never works on two machines at once (home versus work computers), then the user doesn't have a problem. One uses the copy command with the preservation and update flags, and then the problem is solved. It is only when this rule is not followed that there might be problems with keeping track of updates.
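
The flags in question are -p (preserve times and modes) and -u (update, copy only when the source is newer). For example, under the one-structure rule, with a hypothetical flash card mount point:


# Mirror the working docs onto the flash card, keeping time stamps and
# touching only files that are newer on the source side.
cp -pur /home/bperil/billbin/docs /mnt/flash/docs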


A package is small enough to be reasonably shared

A package is useful enough to be functionally valuable and worth finding, downloading, and installing.

A package represents a group of effort. It might consist of image files for pictures on a web page or for whatever use is needed. It might be a bunch of useful scripts that work together in a logical way. It might be source code for an executable module.


The use of CVS or other source control software is also desirable for anything that is worked on by more than one person. It would also be useful for the single programmer who wants to track changes in a reasonable and safe way.
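
Even for the single programmer the setup is small. A sketch with a local CVS repository; the directory names are only examples:


# One time: create a local repository.
cvs -d /home/bperil/cvsroot init
# Put an existing package under control (run from inside the package directory).
cd /home/bperil/billbin/docs
cvs -d /home/bperil/cvsroot import -m "initial import" docs bperil start
# From then on, work from a checked-out copy.
cd /home/bperil/work
cvs -d /home/bperil/cvsroot checkout docs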




Linux sound! What a confusing mess. There are a lot of different things that can run at once; for example, aRts and esd might actually be running at the same time. So what do I have working? Nothing. I can play sounds but I can't record them. I need to be able to record my tapes!


So what to do? Learn more about it and keep at it until I have something working. Meanwhile I can't record my tapes!


There is a kernel tuned to make a sound station. I got on the wave of installing it, but then I changed my mind.


What was the old card's driver:


via82cxxx_audio



I have disabled the interface for this sound device. On the board that I have, an ASUS A7V8X, I am able to do this with no problem by going into the BIOS before grub boots. But then I still needed to go into the system's configuration areas and tell the system not to load the driver, via82cxxx_audio. At that point I was finally able to get an input signal.

The system was very good about having both cards, so this makes me think that I can have multiple sound cards. The way that the artsd stuff works does not impress me. I want to get the ALSA stuff working; I have been to the CCRMA web site and downloaded a lot of their stuff after installing apt. I got their kernel modules and tried running the ALSA stuff, but that didn't work. I bought a 24 bit sound card and that didn't help me at all, as the Linux driver for it is not available. It was $30, so I am sure that they will eventually support the thing. I will keep it. I think that I will plunk it into the old ME box that I have (how embarrassing). ME had a lot of problems, which all resurfaced when I reinstalled it after my bout with inoperability.


I am considering getting a hard drive recorder that will be a dedicated system. There were two that I saw, one from Yamaha and the other from Tascam.

Tascam 2988

Yamaha AWG16.


There was also a 3000 dollar Roland machine.

It would be good to have a cheap-as-dirt system that would be totally Linux. I was thinking that the CCRMA stuff would be that. But it is very difficult to run a dedicated machine that has all of the problems worked out and gives you all of the tools that you need to make music. With the sound cards there is a lot of complexity. The web is full of different information about various Linux sound schemes.

To me it seems that this confusion is partly a product of the creation of music not being a trivial endeavour. And people want to protect their turf in this industry. They sure don't make it easy to get these things working. I know that recording was a done deal about ten years ago. It was as easy as plugging in a microphone. But now it seems with every 'upgrade' of the kernel there is mayhem in the world of sound recording. That is why a dedicated machine is desired.

And then the other machine would be for the internet, programming, games, whatever.


Making music should be easy for someone with musical talent. I don't want to waste my time doing configuration control when I want to sing and play the guitar. I would like to be able to have a dedicated box because music is important to me.


This look into Linux music should have been done a long time ago, I suppose. But I was using my access to Windows as a crutch. And Windows, despite the real money that I have poured into that drain hole all of these years, has always ended badly for me. Maybe it is because I have run the machines too long. But I bought 6 different machines over the years and they all eventually were either dead and gone or I had to keep reloading Windows. My Linux seems very stable and hardly ever has a problem.


I think the dedicated hard drive solution, or a Mac, is a good choice. It would also be nice to be able to record video.



So maybe a Mac.


Yay yay. I have just heard my first digital recording from Linux. It sounded noisy and loud and over-modulated, but it was a stereo recording. And now to start dubbing tapes.


I have had a successful recording without a hum. To do this I needed to run a ground wire between the tape player and the computer. Maybe I will hook up a different player later. For now this does very well in recording. I would naturally have to later test the output and see if things sound OK.

The files that are produced are not wav files. And so I am unclear about what I need to do with these in order to get them into a format that will be useful for burning a CD. I want to do this with old tapes that I have laying around in boxes. And so what I will record will not be other people's copyrighted material but my own. Why do I want to do this? The reasons should be obvious.


I think that while I am hot on this I should go grab another sound card or two and drop them in as well so that I have multiple inputs. Things should be OK with that, I imagine. That way I could have three possible stereo inputs at once.


There are actually more because you can get the input from other various devices as well that are available on these sound cards.


I have learned a lot in the last day, though maybe I have been diverted from my other work.

At least I have learned that the stuff isn't totally broken. And the CCRMA stuff still intrigues me, but it seems that if I am running that then I am at the mercy of some lab out in Berkeley. I have seen how malicious some of the licensing has been, and how outrageous it is that private people nested within a public university can file so many patents and claim that they invented so many things when it seems to me that what is there is a collective effort.


So I guess I don't know what to believe. I would like to use the CCRMA stuff if possible, but I would probably take the machine that I set it up on offline for the most part and just do sound recording with it. That will only be if I pursue this stuff. The hard drive recorder sounds like the better option. Or I could get my own sound distro going.


I should learn more and see where that takes me. Result: after a small investment of time and money I am now able to make audio recordings using the aRts system that so many people on the net panned. While I figure out a migration path to CCRMA or ALSA I can still do my recording.


Next step: converting the files I get from KRec into something that I can burn onto a CD.


Clearly KRec and these tools are being abandoned by the Linux community. I would like to read more and see why. I would guess that it is tectonic forces in the music industry that play the government like a puppet. They probably have scared a lot of the open source audio people into the corners under the auspices of copyright infringement enforcement. I don't really know the reasons for this bad situation within the world of Linux audio. I guess I should get more involved and read more.


Then I will document what I learn or just file it away.


I have often said that Linux is good for some things and not all. It would be good if I could get it to work for all of the things that I want to do. While I am making this happen, in the meantime, I will still need other reliable ways to do things. I thought that getting a Microsoft install on a partition would let me do video recording. This is not the case; it crapped out after a while.


So maybe the dedicated harddrive recorder is the way to go.


I am going to try and run the CCRMA patch kernel and see if I can get Audacity to work.


The bad hum didn't go away until I decided to use a different tape recorder to do the playback. And now the sound is clear and clean. So I am ready to start doing some recording.



Now that I am using KRec I notice that it has some very cool features. For example, at any time while I am recording a session I can add a new file and have it start putting the recording into that. So when they all play back together they are consecutive, but I can have multiple tracks for the CD.


It actually works fairly well. And I have put the dual tape deck right here with the computer so that I can now transfer my tapes in a more reasonable way.


Here are examples of the use of sox:

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-01.raw track01.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-02.raw track02.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-03.raw track03.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-04.raw track04.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-05.raw track05.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-06.raw track06.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-07.raw track07.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-08.raw track08.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-09.raw track09.wav

[bperil@bluesky sounds]$ sox -r 44100 -c 2 -s -w manivet.raw-10.raw track10.wav
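
Rather than typing one line per track, the same conversion can run in a loop. This is a sketch with the same sox options as above; the only addition is deriving the track number from the raw file name:


# Convert every KRec raw file in the directory to a CD-ready wav,
# numbering the tracks from the raw file names.
for f in manivet.raw-*.raw ; do
    n=$(echo "$f" | sed 's/.*raw-\([0-9]*\)\.raw/\1/')
    sox -r 44100 -c 2 -s -w "$f" "track$n.wav"
done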



