
Archive for the ‘MythTV’ Category

Slow MythTV Deletes (Solved!)

November 10th, 2008

Less than an hour after posting my latest progress as "Doesn't make much difference", I've managed to fix the problem (well, it works for me at least).

The secret was written in a post that I linked to in my very first post about this problem. I can’t believe that I didn’t pay it any attention before.

Basically, all you need to do is run "mythfilldatabase" without any parameters. As well as retrieving new listings data, it also cleans up the database.
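For the record, this is all I ran on the backend box; the exact user depends on your setup, so treat the 'mythtv' user below as my assumption rather than gospel:

# Run as the user that owns the MythTV backend (commonly 'mythtv')
sudo -u mythtv mythfilldatabase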

Now this actually makes sense to me. I guess the reason I never encountered this before is that I always used to use the Radio Times grabber to download my programme listings; however, the other year I switched to the EPG (electronic programme guide). That means that I no longer run mythfilldatabase on a regular schedule.

That, in turn, means that I haven't been downloading data, and cleaning the database, as often as I should have been! I seem to remember something about a few thousand items being marked for something as the output scrolled up the screen. I subsequently ran it again just to check, and this time it showed zero for all of those items.

I have since tested it, and I was able to delete around 10 different programmes without any lag in the deletions. That's got to weigh in somewhere in the 15 to 20GB range. That's pretty damn good, if you ask me.

All I need to do now is make sure that mythfilldatabase didn't cock anything up, and then somehow schedule it to run regularly. More than once a year would be a bonus, I guess….
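If I go down the cron route, something along these lines would probably do; the once-a-day timing, the 'mythtv' user and the log path are all placeholders of mine rather than anything MythTV dictates:

# /etc/cron.d/mythfilldatabase (hypothetical file)
# minute hour day month weekday user command
30 4 * * * mythtv mythfilldatabase >> /var/log/mythtv/mythfilldatabase.log 2>&1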

Categories: MythTV

MythTV Slow Deletes (Still)

November 10th, 2008

Having spent bloody ages explaining, and then doing, the defragmentation of a Linux hard drive partition, I’m still experiencing very slow delete times from my MythTV media centre.

To make matters worse, I now get a timeout on a lot of different interactions with the box, for example when I escape out of watching a TV programme. The screen goes black and then, two minutes later, I see a popup telling me the connection to the master backend has 'gone away'.

I have now done an optimisation on the database through several methods, and it does not appear to make life much better. It does seem to make a difference for a while, but things soon return to their previous behaviour. Admittedly, though, I was planning on doing an 'OPTIMIZE TABLE' for every table in the database, and so far I have only done it on the 'program' table.
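When I do get round to the full pass, something like this should cover it; I'm assuming the default 'mythconverg' database name and a root login:

# One table at a time, as I have done for 'program' so far:
mysql -u root -p mythconverg -e "OPTIMIZE TABLE program;"

# Or let mysqlcheck optimise every table in mythconverg in one go:
mysqlcheck --optimize -u root -p mythconverg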

One tool I did use was 'myisamchk --analyze --check-only-changed --quick --sort-index *.MYI' in /var/lib/mysql/mythconverg.

That seemed to spend some time checking several tables, including 'program'.

I'm planning on running a full optimise when I get the chance, but I'm also wondering what else could be going on to cause this problem. Might need to consult 'top'!

Categories: MythTV

Defragging a Linux Partition

October 15th, 2008

So, further to my previous ramblings about when it is a good idea to defrag a Linux partition, the advice I saw repeated time and time again on how to actually do it was quite simple to understand.

The basic theory is that you do not need to do anything more spectacular than move the suspected fragmented files from one partition onto another, and then back to their original location. This is quite easy to do, and the reasoning behind it is simple as well.

Because a move between partitions with the userland commands (like "mv" and "cp") copies the contents of the file, and not the disk blocks used by the file, the original is only deleted once the copy is complete. The file is therefore written to partition B in an unfragmented way, i.e. from the start of the file to the end. Now, remember that the filesystem will try to choose a contiguous run of disk space, plus a little extra. This means that as soon as the file is on partition B, it has been defragmented for us. Brilliant!

The only problem with this is that your file is now on a different partition, and therefore in a different directory to before, so you have to move it back to the original partition for everything to work. Moving it back also has a slight plus: if there is not much free space on partition B to hold the file in a contiguous block, the filesystem will fragment it there (not the desired effect), but seeing as the file came from partition A in the first place, there should be enough free space on A to hold it contiguously. (Unless, of course, partition A is already very full, in which case the file could end up fragmented again; but in that situation there is not much you could do to defragment it anyway.)

If you have enough free space, you do not necessarily need two partitions to make this work. You could simply copy the files from one directory to another, delete the originals, and copy them back again. Note, however, that there is a catch if you are not careful: the first step explicitly has to be a copy, not a move. Moving a file within the same ext2 or ext3 filesystem involves nothing more than rewriting a few directory pointers on disk (hence why moving files around on a Linux filesystem is so quick); the data blocks themselves are never touched, and therefore the file would not be defragmented. The second step can be a "move", as the file will already have been defragmented by the original "copy".
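As a minimal sketch of that single-partition variant (the directory names below are just placeholders I've picked, and you would want the backend stopped, or at least idle, while the files are shuffled about):

# Copy (not move!) the suspect recordings somewhere else on the same filesystem
mkdir /var/video/defrag-tmp
cp -p /var/video/*.mpg /var/video/defrag-tmp/

# Delete the fragmented originals, then move the fresh, contiguous copies back
rm /var/video/*.mpg
mv /var/video/defrag-tmp/*.mpg /var/video/
rmdir /var/video/defrag-tmp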

I should also point out that this procedure works on a Windows NTFS partition as well. If I could find the link I had, I could point you towards a third-party disk defragmenter for Windows that does the same thing (and, if I remember correctly, I've got a feeling I saw Microsoft's Raymond Chen using it as well).

So, how do you actually achieve all this, and how did I get on when I did it? It's actually quite simple, and I'll post about that later.

Categories: Computers, MythTV