Keep in mind that once the data is in MySQL, searching is done with SQL rather than by sequentially reading records. MySQL is very, very fast once you get past opening the database, and if you run the same query over and over it gets better at it. Parsing the XML is the hard part; once that's done, everything is SQL, whose whole purpose in life is to find stuff.

No question, a purpose-built data store with custom code can always outperform a general-purpose solution, *IF* you are willing to put in the effort. But it will take a lot of effort to outrun MySQL. MySQL will pull the keys into memory and build some sort of index to search very quickly. Exactly what kind of index and what kind of search depend on the data, but MySQL makes that choice for you.

MySQL databases tend not to be especially expensive in terms of disk space; I suspect less expensive than XML, which tends to be wordy. I habitually normalize my databases, which tends to hinder performance, but even with thousands of records and long keys, searches are essentially instantaneous. Normalizing also makes virtually any query possible and reduces maintenance (which doesn't sound like an issue in your case).

If you are already posting to a PHP script, then stuffing the data into the database is about as easy as writing the file, maybe easier, and getting it back out is an awful lot easier. I often put fairly trivial things into the database simply because it is so flexible. For a one-off query I use phpMyAdmin or the command-line client; for fancier stuff, the interface from R into MySQL is pretty good. Sometimes I'll even use R just to look at a quick graph. If I need something routine, then it's PHP.

It sounds like you may have a lot of data, which might warrant a look at R. R can be kind of intimidating, but if you want to do any sort of analysis on the data it can be a huge help. The nice part about CentOS is that all of this is free. (I'm a Fedora guy myself.)
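If it helps, here is roughly what the PHP side could look like once the POST arrives. This is only a sketch; the table name (readings), the columns, the database credentials, and the XML element names are placeholders I invented, since I don't know what your XML actually looks like, but SimpleXML plus PDO is really all it takes:

<?php
// Sketch only: table, column, and element names below are guesses.
// Assumes each POST body is a single XML document shaped roughly like
//   <report node="12">
//     <reading time="2014-07-05T09:52:00"><temp>23.4</temp></reading>
//     ...
//   </report>
$raw = file_get_contents('php://input');   // raw POST body from the PIC32
$xml = simplexml_load_string($raw);
if ($xml === false) {
    header('HTTP/1.1 400 Bad Request');
    exit("unparsable XML\n");
}

$db = new PDO('mysql:host=localhost;dbname=telemetry;charset=utf8',
              'piclogger', 'secret');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$stmt = $db->prepare('INSERT INTO readings (node, taken_at, temperature)
                      VALUES (:node, :taken_at, :temp)');

foreach ($xml->reading as $r) {
    $stmt->execute(array(
        ':node'     => (string) $xml['node'],   // attribute on the root
        ':taken_at' => (string) $r['time'],     // attribute on <reading>
        ':temp'     => (float)  $r->temp,       // child element text
    ));
}

Wrap the loop in a transaction if a single POST carries a lot of readings; after that, every report you ever want is just a SELECT.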
--McD

On Sat, 2014-07-05 at 09:52 -0700, Harold Hallikainen wrote:
> Thanks for the quick response! The PIC32s are doing an HTTP POST of the
> XML to the server. So far, I have a 3 or 4 line PHP program that appends
> the XML to a file and emails it to me so I can see that it's working.
> I've started on having that PHP script parse the XML and pass it to
> MySQL, but, as I was setting up the tables in MySQL, I got to wondering
> about just using a flat file holding the XML. Though the XML is verbose,
> the file would still probably be smaller than the MySQL file with its
> constant field widths. Constant record size files are certainly easier
> to search, but it still seems like a random record size file could be
> searched pretty quickly. I'm thinking of a binary search where you'd
> read in double the maximum record size. There's guaranteed to be a
> complete record in there. String search for the opening tag, then parse
> from there. But, as you point out, PHP already has support for MySQL,
> and has functions to parse the XML (SimpleXML, etc.). I MIGHT be able
> to use the PHP XML parsing to deal with the appended file I'm currently
> generating, but this is, I believe, all read into RAM and then dealt
> with there. Not super scalable.
>
> Again, thanks for the ideas! Maybe I'm back to MySQL.
>
> Harold
>
>
> > Certainly stuffing the data in MySQL gives you great flexibility. It
> > shouldn't be a big deal to write a PHP or Perl script to stuff the
> > data into the database. This makes reporting a whale of a lot easier,
> > too.
> >
> > It sounds as if you are already generating XML files. No reason you
> > couldn't just suck the XML from the PIC into the database and
> > eliminate the intermediate file. But I presume you have already
> > sorted out the messy issues of not overwriting old files, etc.
> >
> > If you aren't familiar with PHP, think of it as sloppy C. It is a
> > little easier to get to the database in PHP than in C, mostly because
> > PHP is interpreted, so changes are quicker when you are sorting
> > things out. I also believe PHP has a library to make parsing the XML
> > simpler, although I haven't used it myself.
> >
> > An easier (if not as elegant) solution is to convert the XML to SQL
> > and suck it into the database through the command-line interface. A
> > little bit of bash and you can automate this on the crontab.
> >
> > --McD
> >
> > On Sat, 2014-07-05 at 08:25 -0700, Harold Hallikainen wrote:
> >> I have a bunch of PIC32 systems posting XML to a server. I could
> >> have the server parse the XML and save the data in MySQL. Another
> >> possibility would be to just append new XML to the old in a text
> >> file, then access the data in that file to generate reports. I like
> >> the simplicity of just saving the XML. I'd like to hear comments on
> >> this and suggested XML database engines to search the data. This is
> >> running on a CentOS system.
> >>
> >> Thanks!
> >>
> >> Harold
> >>
> >> --
> >> FCC Rules Updated Daily at http://www.hallikainen.com - Advertising
> >> opportunities available!
> >> Not sent from an iPhone.
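P.S. If you ever do go the batch route (the convert-the-XML-to-SQL-through-the-command-line idea quoted above), the cron job does not have to be bash; the same thing is easy as a PHP command-line script. Again just a sketch, with the spool directory, credentials, table, and element names all made up:

<?php
// Hypothetical batch loader meant to run from cron. Every name here
// (directories, credentials, table, element names) is a placeholder.
$spool = '/var/spool/picdata';
$done  = '/var/spool/picdata/done';

$db = new PDO('mysql:host=localhost;dbname=telemetry;charset=utf8',
              'piclogger', 'secret');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$stmt = $db->prepare('INSERT INTO readings (node, taken_at, temperature)
                      VALUES (?, ?, ?)');

foreach (glob("$spool/*.xml") as $file) {
    $xml = simplexml_load_file($file);
    if ($xml === false) {
        fwrite(STDERR, "skipping unparsable $file\n");
        continue;
    }
    $db->beginTransaction();                 // one file, one transaction
    foreach ($xml->reading as $r) {
        $stmt->execute(array((string) $xml['node'],
                             (string) $r['time'],
                             (float)  $r->temp));
    }
    $db->commit();
    // Keep the original file around, just out of the way.
    rename($file, $done . '/' . basename($file));
}

Run it from the crontab every few minutes and the flat files never pile up.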