PICList Thread
'[PIC] CVS for PIC development'
2006\01\13@004331 by Chetan Bhargava

Hi.

Another way Linux could help in PIC development is CVS (Concurrent
Versions System). Yesterday I installed CVS on my Ubuntu Linux server
and am very pleased to see CVS keep versions of my programs. Another
advantage is off-site backup, as my server is located in a data center.

Before CVS my programs were scattered across various hard drives, USB
drives, etc. Now I'm not at all worried about changing the wrong
version of a file. I also had a lot of trouble when I had to work on a
project both at work and at home: I had to carry my latest changes on a
USB drive, and was sometimes stuck when I forgot to bring the drive
home from work!
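The round trip described here can be sketched like this (a hedged sketch: the server name, repository path, module and file names are hypothetical, and the commands assume a stock CVS client):

```shell
# point the client at the repository (hypothetical server and path)
export CVSROOT=:ext:you@yourserver.example.com:/var/cvsroot
export CVS_RSH=ssh

cvs checkout myproject                # get a working copy at home or at work
cd myproject
# ... edit main.asm ...
cvs commit -m "fix timer prescaler"   # record a new revision on the server
cvs update                            # later, pull in changes committed elsewhere
```

Because every commit lands on the server, the off-site copy is always current, and no USB drive has to travel between work and home.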

Regards,

--
Chetan Bhargava
Web: http://www.bhargavaz.net
Blog: http://microz.blogspot.com

2006\01\13@020423 by Wouter van Ooijen

> Another way Linux could help in PIC development is CVS (Concurrent
> Versions System).

I have been using cvs for years with only my windows system, so it is
cvs that is helping, not Linux.

Wouter van Ooijen

-- -------------------------------------------
Van Ooijen Technische Informatica: http://www.voti.nl
consultancy, development, PICmicro products
docent Hogeschool van Utrecht: http://www.voti.nl/hvu


2006\01\13@021745 by Shawn Wilton

Yeah, I used CVS for many years.  I am however quite fond of Subversion
now.  http://subversion.tigris.org/

I would recommend it to anyone getting into version control, and would
recommend it as a replacement for CVS as well.


On 1/12/06, Wouter van Ooijen wrote:
{Quote hidden}


2006\01\13@023631 by Chen Xiao Fan


> -----Original Message-----
> From: piclist-bounces@mit.edu On Behalf Of Wouter van Ooijen
> Sent: Friday, January 13, 2006 3:04 PM
> To: 'Microcontroller discussion list - Public.'
> Subject: RE: [PIC] CVS for PIC development
>
> > Another way Linux could help in PIC development is CVS (Concurrent
> > Versions System).
>
> I have been using cvs for years with only my windows system, so it is
> cvs that is helping, not Linux.

And MPLAB has CVS/Subversion support as well. I have not yet used the
feature though.

I've tried many things on Linux and FreeBSD for PIC development and I
have to say that Windows is still the best platform for PIC development
even though open source PIC tools are getting better and better.
And the major open source PIC programs are supported under Windows
as well (gputils/gpsim/sdcc). The Windows binary packages for gpsim
make it even easier to install than under Linux.

Of course there are some interesting programs which are Linux only.
Ktechlab looks quite interesting.

Please do not flame me though. ;-)

Regards,
Xiaofan

2006\01\13@060448 by Gerhard Fiedler

Wouter van Ooijen wrote:

>> Another way Linux could help in PIC development is CVS (Concurrent
>> Versions System).
>
> I have been using cvs for years with only my windows system, so it is
> cvs that is helping, not Linux.

I'm using cvsnt http://cvsnt.org/wiki -- available for Windows and
Linux/Unix type systems. Has a number of interesting features that the
original CVS doesn't have.

Either way, something like this is highly recommended for anybody who's
doing any development. Being able to go back to past versions of
whatever files (not only program sources, but also schematics and PCB files,
client documents, etc.) in an organized way is well worth the effort of
setting it up once.

If anybody needs some help with getting cvsnt set up on Windows or work
with it on any platform, just holler :)

Gerhard

2006\01\13@064902 by Peter Todd

On Thu, Jan 12, 2006 at 09:43:31PM -0800, Chetan Bhargava wrote:

Definitely. My setup right now is CVS along with the Unix program
unison. CVS lets me keep track of revisions to my source code, electrical
diagrams and hardware CAD drawings; yeah, CVS works fine for binaries
too, though the diff function doesn't work. unison is rather neat: it's
a program that takes two directories, on the same or different machines,
and "unifies" them to a consistent state. Basically I can have one
copy of all my files on my laptop, one on my personal server, and one at
my studio, and the changes are automatically and intelligently
propagated from one to the other.

Yeah, I use Linux exclusively and love it. make is also an excellent
program once you see what it can do. In one recent project I used it to
automatically keep track of serial numbers and device numbers in a
device with 64 separate PIC chips controlling 64 stepper motors. It did all
the record keeping automatically. Other times I've used it, along with
Perl scripts, to automatically generate things like microstepping stepper
motor tables.
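Table generation of this sort can be done with a one-liner; here is a sketch using awk rather than Perl (the 8-entry size and the 0-255 scale are arbitrary illustrative choices, not the actual values from that project):

```shell
# print an 8-entry quarter-sine microstepping table scaled to 0..255,
# one value per line (suitable for pasting into a PWM lookup table)
awk 'BEGIN {
    pi = 3.14159265358979
    n  = 8
    for (i = 0; i < n; i++)
        printf "%d\n", int(255 * sin(pi/2 * i/(n-1)) + 0.5)
}'
```

A make rule can run this script whenever the table parameters change, so the generated include file never goes stale.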

Now if only the actual PIC support was as good as it is on windows. :(

{Quote hidden}

--
pete@petertodd.ca http://www.petertodd.ca

2006\01\13@074214 by Danny Sauer

Wouter wrote regarding 'RE: [PIC] CVS for PIC development' on Fri, Jan 13 at 01:06:
> I have been using cvs for years with only my windows system, so it is
> cvs that is helping, not Linux.

Anyone using CVS on Windows owes it to themselves to check out
TortoiseCVS at http://www.tortoisecvs.org/.  The shell integration is
*way* nice.  If you're looking for the ability to just easily
version-control your files, Tortoise even makes it pretty darned easy
to create a local CVS "server" without having to learn a command-line
tool or go through the hassle of setting up a pserver.

As a couple of other people have said, Subversion is basically the
"next" open CVS.  SVN is a little harder to set up initially, though
it addresses some of the biggest problems with CVS (versioning of
directories, renaming files, fine-grained authentication) and I like
SVN's snapshot-like versioning better than CVS's file-based versioning
scheme.  The automatic tagging of each release makes rolling back to
old versions more useful, and using the Apache mod_svn method
consolidates the SVN and Apache administration locations (which I
guess one would mostly just care about if one was the admin).  There's
a TortoiseSVN which is every bit as nice as TortoiseCVS, BTW.
http://tortoisesvn.tigris.org/

--Danny

2006\01\13@080954 by Marc Nicholas

Subversion is most definitely the way to go...seems like a few others have
found the light, too ;-)

-marc

On 1/13/06, Danny Sauer wrote:
{Quote hidden}


2006\01\13@084131 by John Pfaff

There is also a Tortoise for subversion.
http://tortoisesvn.tigris.org/

JP

Marc Nicholas wrote:

{Quote hidden}


2006\01\13@084243 by John Pfaff

Sorry, didn't read the whole message before I shot my mouth off :)

John Pfaff wrote:

{Quote hidden}

2006\01\13@101811 by Keith

I totally agree: a developer must use version control in a real development
environment.
Whether it's CVS, Subversion or even SourceSafe, not using version control
and regular backups is suicide.

Besides, how else can you go back and fix that bug 8 versions ago without
changing the current version?
Not to mention concurrent development.

2006\01\13@160647 by Chetan Bhargava

Hi Xiaofan,

I just wanted to add another entry in your list of things linux can do
for PIC :-)


On 1/12/06, Chen Xiao Fan wrote:
{Quote hidden}

--
Chetan Bhargava
Web: http://www.bhargavaz.net
Blog: http://microz.blogspot.com

2006\01\13@162005 by David VanHorn
I don't do PICs, but I use tortoise CVS client under windows for my AVR
development.

2006\01\13@190151 by Xiaofan Chen

On 1/14/06, Chetan Bhargava wrote:
> Hi Xiaofan,
>
> I just wanted to add another entry in your list of things linux can do
> for PIC :-)
>

It is really amazing what the GNU PIC community has achieved during
the last year. I only started to try out Linux for PIC development last
May with Ubuntu 5.04 Hoary. And now I can do most of the things
I need for PIC development under Linux (and to a lesser extent also on
FreeBSD, which I have used for less than two weeks).

A short list:
1) Under Wine, MPASM, MPLAB C18, MPASM30 and MPLAB C30 are all
working.
2) Assembler: gputils is quite mature.
3) Simulator: gpsim is quite functional. Ktechlab is an interesting program
as well; it uses gpsim as its PIC simulation engine.
4) C compiler: SDCC is now much better; the PIC16 port works well and even
the PIC14 port is now functional.
5) Programmer: PS+, PICkit 2, Wisp628A and all kinds of simpler
programmers are supported under Linux.
6) IDE: pikdev and piklab are quite nice programs.
7) Other tools: JAL is a Pascal-like compiler for the PIC.
8) GNUPIC has a list of tools for PIC development on Linux.

And Mac OS X users can often port the Linux tools to the Mac, as can
FreeBSD users.

Now I am actually trying MPLAB ICD2 under Linux (programmer only)
using piklab-0.2. Again I am mainly trying this under Ubuntu (5.10 Breezy,
newly installed) since I am having some problems under Fedora Core 4.

I'd say the only obstacle for me is the ICD2 debugging function. But
the author of piklab does have a plan to support it as well. That will be
a tough job, though, without the debugging specifications and communication
protocols. And I think many people may not need the ICD2 debugging
function, since other methods of providing debugging exist.


Regards,
Xiaofan

2006\01\14@042424 by Peter Todd

On Fri, Jan 13, 2006 at 06:42:14AM -0600, Danny Sauer wrote:
> As a couple of other people have said, Subversion is basically the
> "next" open CVS.  SVN is a little harder to set up initially, though
> it addresses some of the biggest problems with CVS (versioning of
> directories, renaming files, fine-grained authentication) and I like
> SVN's snapshot-like versioning better than CVS's file-based versioning
> scheme.  The automatic tagging of each release makes rolling back to
> old versions more useful, and using the Apache mod_svn method
> consolidates the SVN and Apache adminsitration locations (which I
> guess one would mostly just care about if one was the admin).  There's
> a TortoiseSVN which is every bit as nice as TortoiseCVS, BTW.
> http://tortoisesvn.tigris.org/

Something that turned me off Subversion, though I can't say I researched
it very thoroughly, was that I've been told it really needs to have a
separate repository for each project to work properly. In my case I
really would like to use the system I use in CVS, which is to have my
repository in my data directory and check out projects into that data
directory. This data directory is then synched to my server and work
computers. CVS is used in file mode, with no server.

Will Subversion work in such a situation? I thought it needed an Apache
server to work, i.e. that you can't just set a random directory to be your
repository.

I use CVS for a lot of really small projects, as well as big ones, so a
Subversion server would be overkill, though I like its features a lot.
It's kind of like the proprietary Perforce program I used a lot when I was a
programmer at one company.

--
pete@petertodd.ca http://www.petertodd.ca

2006\01\14@044052 by Shawn Wilton

Peter, you'll be just fine.  Yes, you can have more than one project in a
repo.

I personally have *everything* in SVN.  I keep my CV in there, all my
documents in the "My Docs" folder on windows.  I keep all my development
materials in there.  I mean I have **gigs** worth of material in SVN repos.
Right now my dev directory is 1.5 gigs in size.  I also have about 5 gigs
worth of photos in another repo.

What you would want to do is possibly have multiple repos for projects that
different people will have access to.  But I would say the same of CVS.
Just makes the permissions issue a little easier.  Though you can do
multiple projects and permissions, etc. just fine with one repo.

I don't know if it will work in your case or not.  Ideally you have one
location as a repo, and push and pull from it as you need.  You don't
usually copy the repo from location to location... I suppose you could do it
that way, though.  TortoiseSVN gives you the option of creating a local
repo.  So I would say sure, though I've never tried it.  I've always
uploaded all my materials to my server with RAID5 on it.  You do NOT need
Apache to use SVN.  There just happens to be an apache module that makes SVN
use "easier".



On 1/14/06, Peter Todd wrote:
{Quote hidden}


2006\01\14@045332 by Peter Todd

On Sat, Jan 14, 2006 at 01:40:51AM -0800, Shawn Wilton wrote:
> Peter, you'll be just fine.  Yes, you can have more than one project in a
> repo.
>
> I personally have *everything* in SVN.  I keep my CV in there, all my
> documents in the "My Docs" folder on windows.  I keep all my development
> materials in there.  I mean I have **gigs** worth of material in SVN repos.
> Right now my dev directory is 1.5 gigs in size.  I also have about 5 gigs
> worth of photos in another repo.

That's great news! Sounds just like how I'd probably set things up.

> What you would want to do is possibly have multiple repos for projects that
> different people will have access to.  But I would say the same of CVS.
> Just makes the permissions issue a little easier.  Though you can do
> multiple projects and permissions, etc. just fine with one repo.

Sounds about right. Fine in my case, all my projects are single
developer for now. Arts school is not known for its technical
collaborations...

> I don't know if it will work in your case or not.  Ideally you have one
> location as a repo, and push and pull from it as you need.  You don't
> usually copy the repo from location to location...Suppose you could do it
> that way though.  TortoiseSVN gives you the option of creating a local
> repo.  So I would say sure, though I've never tried it.  I've always
> uploaded all my materials to my server with RAID5 on it.  You do NOT need
> Apache to use SVN.  There just happens to be an apache module that makes SVN
> use "easier".

Thanks for your help. I think I'll take a close look at the Subversion
docs sometime and see exactly how I can go about setting that up. Of course,
it may well be the case that this sort of local archive that I want is a
new feature; I don't remember hearing about it before, but we'll see.

And anyway, it's opensource, and I'm somewhat of a programmer!

--
pete@petertodd.ca http://www.petertodd.ca

2006\01\14@050548 by Shawn Wilton

If you need any assistance, feel free to contact me off list.

On 1/14/06, Peter Todd wrote:
{Quote hidden}


2006\01\14@170005 by Gerhard Fiedler

Peter Todd wrote:

> yeah, cvs works fine for binaries
> too, though the diff function doesn't work.

cvsnt has a new server-side diff plugin that allows you to define custom
diff programs -- so you can have one for a specific type of binary or other
file. (Often being a text file is not enough for a normal diff to work --
there are complex XML text files that just don't work well through a common
diff, even though they are "text" files.)

> unison is rather neat, it's a program that takes two directories, on the
> same or different machines, and "unifies" them to have a consistant
> state.

unison is also available as a Windows binary, and equally handy -- for the
ones from the dark side :)

> make is also an excellent program once you see what it can do. In one
> recent project I used it to automatically keep track of serial numbers
> and device numbers in a device with 64 seperate pic chips controlling 64
> stepper motors. Did all the record keeping automatically. Other times
> I've used it, along with perl scripts, to automatically generate stuff
> like microstepping stepper motor tables automatically.

What I don't like with make is the dependency tracking. Say you have
fileA.h that includes fileB.h. fileA.c includes fileA.h, and therefore is
dependent on both fileA.h and fileB.h. But if you tell make only the
existing dependencies, you'd have to create a special rule that bumps the
timestamp of fileA.h if fileB.h is newer, so that fileA.c gets rebuilt if
fileB.h changes. But that then doesn't go well with a versioning system,
because fileA.h didn't actually change.

The only workaround I've found so far is to "manually" do the dependency
tracking in the make file, through variables that contain the dependency
lists. Which is a pain.
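A common way around this (a sketch assuming a gcc-style compiler; MPASM and the Microchip C toolchains don't offer these flags) is to let the compiler emit the transitive dependencies itself, so fileA.c automatically depends on both fileA.h and fileB.h without bumping any timestamps:

```make
# recipe lines must be indented with tabs
SRCS := main.c fileA.c
OBJS := $(SRCS:.c=.o)

prog: $(OBJS)
	$(CC) -o $@ $(OBJS)

# -MMD writes a .d file listing every header the source actually included;
# -MP adds phony targets so a deleted header doesn't break the build
%.o: %.c
	$(CC) -MMD -MP -c $< -o $@

# pull in the generated dependency lists (silently absent on the first build)
-include $(OBJS:.o=.d)
```

After the first build, editing fileB.h alone is enough to get fileA.c rebuilt, and nothing in the versioned tree is touched.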

I thought about looking into making ant work for C builds. Does anybody have
experience with that? Would it handle the dependencies better?

Gerhard

2006\01\14@174152 by John Temples

On Sat, 14 Jan 2006, Gerhard Fiedler wrote:

> What I don't like with make is the dependency tracking. Say you have
> fileA.h that includes fileB.h. fileA.c includes fileA.h, and therefore is
> dependent on both fileA.h and fileB.h. But if you tell make only the
> existing dependencies, you'd have to create a special rule that bumps the
> timestamp of fileA.h if fileB.h is newer, so that fileA.c gets rebuilt if
> fileB.h changes. But that then doesn't go well with a versioning system,
> because fileA.h didn't actually change.
>
> The only workaround I've found so far is to "manually" do the dependency
> tracking in the make file, through variables that contain the dependency
> lists. Which is a pain.

"makedepend" will generate a dependency list and insert it into your
makefile, and it handles cases like you mention above (without
modifying timestamps).  This requires you to manually run makedepend
when you change your dependencies, but it doesn't require you to
manage the dependencies yourself.

With some clever scripting, you can have makedepend run when you do a
"make" and keep all the dependencies updated automatically.

--
John W. Temples, III

2006\01\14@194705 by Peter Todd

On Sat, Jan 14, 2006 at 02:41:50PM -0800, John Temples wrote:
> "makedepend" will generate a dependency list and insert it into your
> makefile, and it handles cases like you mention above (without
> modifying timestamps).  This requires you to manually run makedepend
> when you change your dependencies, but it doesn't require you to
> manage the dependencies yourself.
>
> With some clever scripting, you can have makedepend run when you do a
> "make" and keep all the dependencies updated automatically.

Also if you are writing actual Linux/Unix/Whatever programs, rather than
microprocessor stuff, take a look at autoconf and automake. It's the
standard build setup for opensource programs these days. It
(essentially) takes a list of source files and automatically generates
the makefiles for you, as well as determining details like which
C compiler to use, what features are available on the current system,
etc. It's what allows open source software to be portable across so many
different architectures and environments, and it handles dependencies
quite nicely.

That said, on all my PIC projects to date I don't even use dependencies.
Everything is compiled into one static image. Microprocessor code is so
small, and modern computers are so fast, that for my projects I find it's
not even worth the bother. We're not talking about the millions of lines of
code in the Linux kernel after all!

--
pete@petertodd.ca http://www.petertodd.ca

2006\01\15@120030 by Gerhard Fiedler

Peter Todd wrote:

> John Temples wrote:
>> "makedepend" will generate a dependency list and insert it into your
>> makefile, and it handles cases like you mention above (without
>> modifying timestamps).  This requires you to manually run makedepend
>> when you change your dependencies, but it doesn't require you to
>> manage the dependencies yourself.

Thanks, that looks like a helpful tool. It seems to have a pitfall (header
file dependencies get parsed only once, and when they depend on
preprocessor definitions that change with the file where they are included,
the later dependencies may be wrong), but that's probably a not-so-common
case. I'll see how that works out...

I'd still like a make tool that can handle recursive dependencies without a
build step in between.


> That said, on all my PIC projects to date I don't even use dependencies.
> Everything is compiled into one static image. Microprocessor code is so
> small, and modern computers are so fast, it's not even worth the bother
> I find for my projects. We're not talking about the millions of lines of
> code in the Linux kernel after all!

Yes, but without dependencies, there's no need for make at all. Without
them, the makefile usually collapses to a single-line batch file. OTOH,
there are quite a number of projects with complexity between the Linux
kernel and trivial projects that only need a couple of files. I find that most
of my embedded projects are in that space between :)

Gerhard

2006\01\15@155127 by Peter Todd

On Sun, Jan 15, 2006 at 03:00:02PM -0200, Gerhard Fiedler wrote:
> > That said, on all my PIC projects to date I don't even use dependencies.
> > Everything is compiled into one static image. Microprocessor code is so
> > small, and modern computers are so fast, it's not even worth the bother
> > I find for my projects. We're not talking about the millions of lines of
> > code in the Linux kernel after all!
>
> Yes, but without dependencies, there's no need for make at all. Without
> them, the makefile collapses usually to a single line batch file. OTOH,
> there are quite a number of projects with complexities between the Linux
> kernel and trivial projects that only need a couple files. I find that most
> of my embedded projects are in that space between :)

And often those trivial projects grow into not-so-trivial projects... Or
you just cut-n-pasted your development directory from a non-trivial
project to a trivial one... I do that a *lot*.

In any case, make does have a decent and simple syntax, so your targets
can be "make", "make burn", "make new_serial_number", whatever. Anyway,
even if my source code is a single image, often I do have dependencies
on automagically generated tables of values, such as serial numbers,
stepper motor driver tables and math constants. Those can take a little
longer to build from scratch.

--
pete@petertodd.ca http://www.petertodd.ca

2006\01\15@215750 by kravnus wolf

wow, makedepend makes makefile generation a breeze :)

John

--- Gerhard Fiedler wrote:

{Quote hidden}


2006\01\16@043640 by Alan B. Pearce

>I personally have *everything* in SVN.  I keep my CV in
>there, all my documents in the "My Docs" folder on windows.  
>I keep all my development materials in there.  I mean I
>have **gigs** worth of material in SVN repos. Right now
>my dev directory is 1.5 gigs in size.  I also have about
>5 gigs worth of photos in another repo.

Is this on a personal server? If so how do you back it up, and how often?

2006\01\16@051654 by Shawn Wilton

Yes, it's on a server currently being hosted in MA by a friend.  It used to be
local, but I lost my free local pipe, so now a friend colo's it for me
(still free).

It's run on RAID 5, so I should back it up, but the odds of losing more than
one disk drive at a time are extremely low, so I just let the RAID take care
of it.  If there's something I *really* care about, like my photo collection
(family, friends, pets, many of whom have passed), then I do one of two
things.  Either I make a tarball of the repo on the server, download it
to my home machine, and burn it to DVD-R; or I do a dump of the repository to
a dump file, turn that into a tarball, and download and burn it to DVD-R.

Since the repo is in two places at once at any given time (on the server,
and locally), I can always make a backup of the current snapshot on my local
machine.  But doing so would lose the history aspect of the system.

As for the RAID, I have found it to be extremely dependable.  I have gone
through several drives on my current server, and it hasn't skipped a beat
yet.  Just replace the dead drive, do a recover on the raid array, and all
is well.


Oh, and as for how often, I usually back things up when I make major
changes.  Most things don't change often, so I don't find a need to download
and burn that often.  Maybe once every 6 months or so.

On 1/16/06, Alan B. Pearce wrote:
>
> >I personally have *everything* in SVN.  I keep my CV in
> >there, all my documents in the "My Docs" folder on windows.
> >I keep all my development materials in there.  I mean I
> >have **gigs** worth of material in SVN repos. Right now
> >my dev directory is 1.5 gigs in size.  I also have about
> >5 gigs worth of photos in another repo.
>
> Is this on a personal server? If so how do you back it up, and how often?

2006\01\16@054340 by Alan B. Pearce

>Oh, and as for how often, I usually back things up when
>I make major changes.  Most things don't change often,
>so I don't find a need to download and burn that often.
>Maybe once every 6 months or so.

OK, seems fair enough.

One other question, do any of the CVS systems allow incremental backups,
where only items changed since a last full backup would be backed up? This,
to me anyway, would seem to be a sensible way of doing it without needing to
spend money on a tape drive or whatever, to do a full backup each time.

2006\01\16@072639 by Shawn Wilton

With SVN, you can dump specific revisions.  I was actually thinking about
writing a script that does incremental backups in this fashion.

So you could use "svnadmin dump -r25:26 > svn_dump/25.26.dump" and then just
store that dump file somewhere.

I really can't stress enough how useful and simple version-control systems
(CVS, Subversion and the like) have become.

I can't figure out why *everyone* isn't using something.  I mean, seriously:
it takes 10 minutes to set up.
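The incremental-backup script Shawn has in mind could look roughly like this (a hedged sketch: the repository and backup paths are made up, and it assumes the svnadmin/svnlook tools from a standard Subversion install):

```shell
#!/bin/sh
REPO=/var/svn/repo          # hypothetical repository path
DEST=/backups               # hypothetical backup directory

LAST=$(cat "$DEST/last_rev" 2>/dev/null || echo 0)   # revision saved last run
HEAD=$(svnlook youngest "$REPO")                     # current head revision

if [ "$HEAD" -gt "$LAST" ]; then
    # --incremental stores only the changes since revision $LAST
    svnadmin dump "$REPO" -r "$((LAST + 1)):$HEAD" --incremental \
        > "$DEST/repo.$((LAST + 1))-$HEAD.dump"
    echo "$HEAD" > "$DEST/last_rev"
fi
```

To restore, replay the dump files in order with svnadmin load into a fresh repository.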


On 1/16/06, Alan B. Pearce wrote:
{Quote hidden}


2006\01\16@073736 by Gerhard Fiedler

Alan B. Pearce wrote:

> One other question, do any of the CVS systems allow incremental backups,
> where only items changed since a last full backup would be backed up? This,
> to me anyway, would seem to be a sensible way of doing it without needing to
> spend money on a tape drive or whatever, to do a full backup each time.

Yes and no, depending on what you mean by "item".

CVS is based on RCS files. Every file in your sandbox corresponds to a file
in the repository, which contains the complete history of that file. The
file changes whenever you commit to it.

So yes, if you do incremental backups (with a backup app that supports
that, like xcopy :) you would back up only the files that changed since
last time.

But no, you would not only back up what changed, you would back up all
files that changed -- including the history parts that didn't change. (Say
you change a character in a 100k file, you'd still back up the 100k file.)

Sector-oriented incremental backups may work differently, but that's a
different story.

Gerhard

2006\01\16@074548 by Shawn Wilton

There you go Alan, the answer to your question.  Use SVN.  :-)

On 1/16/06, Gerhard Fiedler wrote:
{Quote hidden}


2006\01\16@080153 by Alan B. Pearce

>But no, you would not only back up what changed, you
>would back up all files that changed -- including the
>history parts that didn't change. (Say you change a
>character in a 100k file, you'd still back up the 100k file.)

OK, that wasn't what I was aiming for. I was hoping there would be some way
of having the 100k file already backed up, and then backing up whatever it
uses as a link for just the change.

>Sector-oriented incremental backups may work differently,
>but that's a different story.

yeah, that definitely isn't where I was seeking to go.

2006\01\16@080428 by Alan B. Pearce

>There you go Alan, the answer to your question.  Use SVN.  :-)

Umm, but what I took from your posting is that I would still get a copy of
the complete file. Is that not so? After all one would be dumping the
"complete revision" as it would be when fed to the compiler, no?

2006\01\16@081414 by Shawn Wilton

What do you mean by compiler?  SVN allows you to dump revisions
incrementally if you want, so you would basically get a diff against the
previous revision.


On 1/16/06, Alan B. Pearce wrote:
>
> >There you go Alan, the answer to your question.  Use SVN.  :-)
>
> Umm, but what I took from your posting is that I would still get a copy of
> the complete file. Is that not so? After all one would be dumping the
> "complete revision" as it would be when fed to the compiler, no?
>

2006\01\16@090434 by Danny Sauer

Peter wrote regarding 'Re: [PIC] CVS for PIC development' on Sat, Jan 14 at 03:27:
> Something that turned me off subversion, though I can't say I researched
> it very thoughly, was that I've been told it really needs to have a
> seperate repository for each project to work properly.

For some definitions of "properly".  Subversion increments version
numbers repository-wide.  Some people find this to be a problem, but
some people will complain about anything. :)  If you want discrete
version numbers per-project, you'll need discrete repositories
per-project.  If you can look at the repository's version numbers as an
internal piece of data without much external significance, you'll be
fine with one repo.

> In my case I
> really would like to use the system I use in cvs, which is to have my
> repository in my data directory and check out projects into that data
> directory. This data directory then is synched to my server and work
> computers. cvs is used in the file mode, no server.
>
> Will subversion work in such a situation? I thought it needed an Apache
> server to work, IE you can just set a random directory to be your
> repository.

It sounds like you're basically looking for RCS, not CVS (which is
little more than an RCS wrapper).  SVN can run as an Apache server
module or as a standalone pserver-like server.  I've never used the
file mode, but it's supposedly an option. :)
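The file mode needs no server process at all; something like the following should cover Peter's local-repository setup (a sketch, assuming a standard svn client; the paths and project name are hypothetical):

```shell
# create a repository as an ordinary directory inside your data tree
svnadmin create ~/data/svnrepo

# import a project and check a working copy back out, all over file://
svn import ~/work/blinky file://$HOME/data/svnrepo/blinky -m "initial import"
svn checkout file://$HOME/data/svnrepo/blinky ~/data/blinky
```

The whole ~/data tree, repository included, can then be carried between machines the way Peter already does with his CVS setup.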

For a bunch of small projects, I'd set up a single SVN repository and
then just add directories to the SVN server.  Basically, treat it as a
version-controlled backup server.  Actually, that's what I do - since
you don't *have* to check out the root level of a repository, you just
use subdirectories and check those out as appropriate.  Set up another
machine to check the root out periodically and you have an automatic
backup (albeit w/ lost version info).

--Danny

2006\01\16@115527 by Danny Sauer

Gerhard wrote regarding 'Re: [PIC] CVS for PIC development' on Mon, Jan 16 at 09:35:
> But no, you would not only back up what changed, you would back up all
> files that changed -- including the history parts that didn't change. (Say
> you change a character in a 100k file, you'd still back up the 100k file.)

If you used something like svnmirror as your backup system, you would
technically only "back up" the changes, as the only new information
that's stored on the mirror / backup is the difference.  It's not really
clear what the purpose of wanting incremental backups is here, but if
it's conservation of bandwidth or storage space, mirroring the svn
repository (whether with svnmirror, rsync, xcopy, or something else)
is probably the way to go.
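For the rsync option, the whole "mirror the repository" step is essentially one command. A sketch with made-up paths (for a BDB-backed repository you'd want the server quiesced first, as discussed later in the thread):

```shell
# mirror_repo copies a repository directory verbatim to another location.
# The trailing slash makes rsync copy the directory's contents.
mirror_repo() {
    rsync -a --delete "$1/" "$2/"   # -a keep perms/times, --delete exact mirror
}

# Typical use (hypothetical paths; the destination may be remote over ssh):
# mirror_repo /var/svn/myrepo backuphost:/backup/svn/myrepo
```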

Or use svk - http://svk.elixus.org/

--Danny

2006\01\16@141626 by Chetan Bhargava

picon face
Although I have used cvs clients (winCVS) on windows in the past, I
have never used or configured a CVS server on windows. I'm not sure
how the functionality is on windows as compared to unix. Below is an
excerpt from Open Source Development with CVS, 3rd Edition.

"The Windows and Macintosh distributions of CVS are generally limited
in functionality. They can all act as clients, meaning that they can
contact a repository server to obtain a working copy, commit, update,
and so on. But they can't serve repositories themselves. If you set it
up right, the Windows port can use a local-disk repository, but it
still can't serve projects from that repository to other machines. In
general, if you want to have a network-accessible CVS repository, you
must run the CVS server on a Unix box."

-- http://cvsbook.red-bean.com/

--
Chetan Bhargava
Web: http://www.bhargavaz.net
Blog: http://microz.blogspot.com

2006\01\16@145148 by Gerhard Fiedler

picon face
Alan B. Pearce wrote:

>>There you go Alan, the answer to your question.  Use SVN.  :-)
>
> Umm, but what I took from your posting is that I would still get a copy of
> the complete file. Is that not so? After all one would be dumping the
> "complete revision" as it would be when fed to the compiler, no?

What Shawn told you is a diff, not a backup. A diff between two revisions
is not really an "incremental backup". And one shouldn't do backups of a
sandbox anyway, you should do backups of the repository. Which has nothing
to do with what Shawn wrote.

Gerhard

2006\01\16@150058 by Gerhard Fiedler

picon face
Shawn Wilton wrote:

> There you go Alan, the answer to your question.  Use SVN.  :-)

We've been talking about completely different things. Not sure you
noticed... Alan wanted to know about backup, and "incremental backup" in
that. That's a pretty standard term, and not really related to revision
differences from a version control system. When thinking about backup
(incremental or otherwise), I'd strongly recommend that you back up your
repository, not a sandbox or an export.

Anyway, if you are so inclined, of course you can dump revision differences
with cvs/cvsnt also. But I wouldn't call that a "backup", that's why I
didn't mention it. So maybe jump less high next time when you read
something about cvs/cvsnt... it may be that you're missing something :)

Gerhard

2006\01\16@150856 by Gerhard Fiedler

picon face
Danny Sauer wrote:

> Set up another machine to check the root out periodically and you have an
> automatic backup (albeit w/ lost version info).

Now why would you do /that/? Can't you back up the repository itself, with
all version info? I really would not want to lose all of that... That's why
I back up my cvsnt /repository/, not a shadow or sandbox.

Gerhard

2006\01\16@152635 by Gerhard Fiedler

picon face
Chetan Bhargava wrote:

> Although I have used cvs clients (winCVS) on windows in the past, I
> have never used or configured a CVS server on windows. I'm not sure
> how the functionality is on windows as compared to unix.

cvs is not really the way to go on Windows. The cvs port to Windows was
originally cvsnt, which then developed some interesting features and got
ported back to Linux/Unix/Mac. While still based on the cvs code, it is a
quite different beast now, and has many features that cvs doesn't have. So
currently there are two source bases that are commonly referred to as
"cvs": the original cvs and cvsnt. The command, protocol and repository
structure are still quite similar, and cvs clients can talk to cvsnt
servers and do the basic operations, and cvsnt clients can talk to cvs
servers.

> Below is an excerpt from Open Source Development with CVS, 3rd Edition.
> [...]

What was written there may be true for the original cvs, but it's
definitely ignoring the existence of cvsnt as a widely compatible fork of
cvs. I (as many others) have a (true) cvsnt server running on a Win2k box,
and it runs just fine.

You can find a (maybe not completely balanced) comparison between cvsnt and
cvs at http://www.march-hare.com/cvspro/compare.htm

Gerhard

2006\01\16@165046 by Peter Todd

picon face
On Mon, Jan 16, 2006 at 05:49:48PM -0200, Gerhard Fiedler wrote:
> Alan B. Pearce wrote:
>
> >>There you go Alan, the answer to your question.  Use SVN.  :-)
> >
> > Umm, but what I took from your posting is that I would still get a copy of
> > the complete file. Is that not so? After all one would be dumping the
> > "complete revision" as it would be when fed to the compiler, no?
>
> What Shawn told you is a diff, not a backup. A diff between two revisions
> is not really an "incremental backup". And one shouldn't do backups of a
> sandbox anyway, you should do backups of the repository. Which has nothing
> to do with what Shawn wrote.

I'd disagree on that. Backing up your sandbox should be done the most
often, because you sandbox is what you are most likely to delete by
accident, wiping away your most recent changes.

That said, having that backup be done on the same computer is probably
just fine, and the increments can be deleted quite regularly as anything
old will be handled by revision control.

--
EraseMEpetespamEraseMEpetertodd.ca http://www.petertodd.ca

2006\01\16@170150 by Danny Sauer

flavicon
face
Gerhard wrote regarding 'Re: [PIC] CVS for PIC development' on Mon, Jan 16 at 15:39:
> Danny Sauer wrote:
>
> > Set up another machine to check the root out periodically and you have an
> > automatic backup (albeit w/ lost version info).
>
> Now why would you do /that/? Can't you back up the repository itself, with
> all version info? I really would not want to lose all of that... That's why
> I back up my cvsnt /repository/, not a shadow or sandbox.

Well, in the case of subversion, you have all of the history
information checked out in the local repository.  SVN trades some
local disk space for a big reduction in network bandwidth, which is
part of the reason it feels significantly faster.  So with SVN it
doesn't *really* matter - I'm pretty sure you can reconstruct a
repository from any local copy thereof.

In a previous life, I managed some web content in CVS.  There were two
web servers - the development server which held a checked-out version
of the code, and a public server (actually two behind a load-balancing
proxy, technically) which had whatever was tagged as stable (using
branches, IIRC).  The version information wasn't really important, as
developers would make a copy to mess with and when their local version
was OK they'd check it back into CVS.  The hooks on the CVS server
would push the updated file up to the production server when anything
was checked in.  So that was automatic, and didn't take a lot of
bandwidth - it had a side effect of making a full current backup of the
repository on two machines.  Sure, version info would have been lost,
but we were using CVS more for the ability to keep multiple people's
work in sync and because it was a convenient way to manage stable vs
unstable site uploading - the version info didn't matter and would
have been wasted space in that admittedly atypical situation. :)

--Danny

2006\01\16@172345 by Gerhard Fiedler

picon face
Peter Todd wrote:

> I'd disagree on that. Backing up your sandbox should be done the most
> often, because you sandbox is what you are most likely to delete by
> accident, wiping away your most recent changes.

Depends on how you work. I /use/ my repository. Which means that when
someone else would run a backup on a sandbox, I usually commit the files
and run a backup on the repository. Gives me the best of both worlds and
then some: I always have a current backup of the most recent changes /and/
I have backed up all history.

Since the backup is incremental, only changed files get backed up. Which is
not that much, and runs quite fast.

Gerhard

2006\01\16@172846 by Gerhard Fiedler

picon face
Danny Sauer wrote:

> Well, in the case of subversion, you have all of the history information
> checked out in the local repository.  SVN trades some local disk space
> for a big reduction in network bandwidth, which is part of the reason it
> feels significantly faster.  

This is interesting. There are a number of "far remote" situations with
bandwidth problems where this could be interesting.

> So with SVN it doesn't *really* matter - I'm pretty sure you can
> reconstruct a repository from any local copy therof.

OTOH, if using anything for backup purposes, I'd go farther than being
"pretty sure" that it works :)

Gerhard

2006\01\16@175307 by Alex Harford

face picon face
On 1/16/06, Gerhard Fiedler <@spam@lists@spam@spamspam_OUTconnectionbrazil.com> wrote:
>
> What Shawn told you is a diff, not a backup. A diff between two revisions
> is not really an "incremental backup". And one shouldn't do backups of a
> sandbox anyway, you should do backups of the repository. Which has nothing
> to do with what Shawn wrote.

I disagree.  Shawn's command does a dump of the repository.  If you
store the diffs between every revision, you will be able to restore
the repository by doing an 'svnadmin load'.  Obviously you wouldn't do
it by hand, it would be done by a script.

Alan, I think you are confused about how Subversion works... when you
get the difference between two revisions in a repository, it gives you
a text file that lists only what has changed, line by line.

Index: path/to/file.txt
==================================================================
--- path/to/file.txt        (revision 1767)
+++ path/to/file.txt        (revision 1769)
@@ -12,9 +12,6 @@
Some text
Some more text
    Blah blah blah
-    Extra lines that don't need to be here
-    Extra lines that don't need to be here
-    Extra lines that don't need to be here
    Blah blah blah
    Blah blah blah
    Blah blah blah

2006\01\16@190958 by Shawn Wilton

picon face
Yeah, except the diffs would work just fine as a backup.  You can dump
incrementally and then piece the repo back together with all version history
if need be from the revisions.  So I would disagree with you here.  Also,
this isn't from the sandbox, it's from the repo directly. It's an exact copy
of the repo for those specific versions.  If I want, I can have a repo with
10 revisions.  I can split the revisions 1-5, and 6-10 in to two separate
files.  Reimport and I have the exact same repo as I started with.  I would
consider that a fine backup option.  It's not ideal, but it's certainly
still a backup.
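The 1-5 / 6-10 split described above would look roughly like this with `svnadmin` (repository paths are made up; `--incremental` is what makes each piece loadable on top of the previous one):

```shell
# dump_range REPO FIRST LAST OUTFILE
# Dumps one revision range; --incremental stores the first revision of
# the range as a delta so the pieces can be loaded in sequence.
dump_range() {
    svnadmin dump "$1" -r "$2:$3" --incremental > "$4"
}

# restore_from_pieces NEWREPO DUMPFILE...
# Rebuilds an identical repository, history included, from the pieces.
restore_from_pieces() {
    repo=$1
    shift
    svnadmin create "$repo"
    for piece in "$@"; do
        svnadmin load "$repo" < "$piece"
    done
}

# Example (hypothetical paths, 10-revision repository):
# dump_range /var/svn/myrepo 0 5  part1.dump
# dump_range /var/svn/myrepo 6 10 part2.dump
# restore_from_pieces /var/svn/restored part1.dump part2.dump
```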

On 1/16/06, Gerhard Fiedler <spamBeGonelistsspamKILLspamconnectionbrazil.com> wrote:
{Quote hidden}

> -

2006\01\16@191032 by Shawn Wilton

picon face
>
> Anyway, if you are so inclined, of course you can dump revision
> differences
> with cvs/cvsnt also. But I wouldn't call that a "backup", that's why I
> didn't mention it. So maybe jump less high next time when you read
> something about cvs/cvsnt... it may be that you're missing something :)



See my previous email.


Gerhard
>
> -

2006\01\16@191314 by Gerhard Fiedler

picon face
Alex Harford wrote:

>> What Shawn told you is a diff, not a backup. A diff between two revisions
>> is not really an "incremental backup". And one shouldn't do backups of a
>> sandbox anyway, you should do backups of the repository. Which has nothing
>> to do with what Shawn wrote.
>
> I disagree.  

With what exactly?

With "What Shawn told you is a diff"? Apparently not.

With "A diff between two revisions is not really an 'incremental backup'"?
Usually you do incremental backups to save space and time. Usually, storing
diffs exported from a repository doesn't do either. (For one thing, it
duplicates every removed and changed line.) That's why I said that there is
a difference between storing a diff and what's usually called "incremental
backup" -- even though file diffs can be used for that. But probably not
efficiently.

Why wouldn't you take simply a backup application (or suitable script) and
run it over the repository? After all, the OP didn't ask about clever ways
to abuse version control systems as backup, he asked how to backup the
repository :)

> Alan, I think you are confused about how Subversion works... when you
> get the difference between two revisions in a repository, it gives you
> a text file that lists only what has changed, line by line.

That's not a special feature of subversion... pretty much all version
systems do that. A handy feature, and one in the core of any version
system. But not really a good backup method, IMO.

Gerhard

2006\01\16@193028 by Alex Harford

face picon face
On 1/16/06, Gerhard Fiedler <.....listsspam_OUTspamconnectionbrazil.com> wrote:
>
> Why wouldn't you take simply a backup application (or suitable script) and
> run it over the repository? After all, the OP didn't ask about clever ways
> to abuse version control systems as backup, he asked how to backup the
> repository :)

Because you shouldn't run a file level backup on a live repository.
If you are the only user and you can shut down the server, fine, but
if you have multiple users that you want to allow access without
interruption, then you should be using the 'svnadmin dump' command.
Dump is guaranteed to work across all versions, and across backend
types (FSFS or BDB).

Alex

2006\01\16@193430 by Shawn Wilton

picon face
He was asking for a way to backup only the changes.  I think an incremental
dump pretty much does that.  Hence, I disagree with your statement that an
incremental dump is not a backup.


On 1/16/06, Gerhard Fiedler <TakeThisOuTlists.....spamTakeThisOuTconnectionbrazil.com> wrote:
{Quote hidden}

> -

2006\01\16@205959 by Alex Harford

face picon face
On 1/16/06, Shawn Wilton <TakeThisOuTblack9KILLspamspamspamgmail.com> wrote:
> He was asking for a way to backup only the changes.  I think an incremental
> dump pretty much does that.  Hence, I disagree with your statement that an
> incremental dump is not a backup.

Especially if you have a script that added the diffs between revisions
to a tar.gz file.  Since the text in the diffs is already in the file
from previous versions, it should compress quite nicely.

Alex

2006\01\17@044221 by Alan B. Pearce

face picon face
>What Shawn told you is a diff, not a backup. A diff
>between two revisions is not really an "incremental
>backup". And one shouldn't do backups of a sandbox
>anyway, you should do backups of the repository.

OK, I see where you are coming from.

I was thinking in terms of some way of protecting oneself from making a
small change, and then needing to back up a sizeable amount of data, just to
get a backup of the small change, on a home system, where doing a large
backup may be skipped "because most of that is backed up anyway".

It looks to me like having some form of networked machine that one could use
as a mirror would probably be the way to go.

2006\01\17@050341 by Gerhard Fiedler

picon face
Alex Harford wrote:

> Because you shouldn't run a file level backup on a live repository.

That's a pretty blanket statement, and as such plain wrong. I can and do
run file level backups on my repository (cvsnt on Win2k), and there's
nothing problematic with it. The repository is at any point in time in a
consistent state, and so is the backup.

Gerhard

2006\01\17@051129 by Gerhard Fiedler

picon face
Shawn Wilton wrote:

> He was asking for a way to backup only the changes.  I think an incremental
> dump pretty much does that.  Hence, I disagree with your statement that an
> incremental dump is not a backup.

Hey... He was not asking for "a way to backup only the changes", he was
asking "do any of the CVS systems allow incremental backups, where only
items changed since a last full backup would be backed up?" The diff
solution is your interpretation of that, it works, but for me it's not
really the best.

And I didn't make a "statement that an incremental dump is not a backup", I
said "a diff between two revisions is not really an 'incremental backup'".
And I gave reasons for that.

I don't really know what to reply; it would be a lot easier if you didn't
misrepresent the original poster's words.

Gerhard

2006\01\17@051417 by Michael Rigby-Jones

picon face


>-----Original Message-----
>From: .....piclist-bouncesspamRemoveMEmit.edu [RemoveMEpiclist-bouncesspamspamBeGonemit.edu]
>Sent: 17 January 2006 10:03
>To: spamBeGonepiclist@spam@spamspam_OUTmit.edu
>Subject: Re: [PIC] CVS for PIC development
>
>
>Alex Harford wrote:
>
>> Because you shouldn't run a file level backup on a live repository.
>
>That's a pretty blanket statement, and as such plain wrong. I
>can and do run file level backups on my repository (cvsnt on
>Win2k), and there's nothing problematic with it. The
>repository is at any point in time in a consistent state, and
>so is the backup.
>
>Gerhard

But are you checking stuff in and out whilst the backup is being performed? Clearly that would be a bad idea and I think that is what the OP meant by "live".

Regards

Mike

=======================================================================
This e-mail is intended for the person it is addressed to only. The
information contained in it may be confidential and/or protected by
law. If you are not the intended recipient of this message, you must
not make any use of this information, or copy or show it to any
person. Please contact us immediately to tell us that you have
received this e-mail, and return the original to us. Any use,
forwarding, printing or copying of this message is strictly prohibited.
No part of this message can be considered a request for goods or
services.
=======================================================================

2006\01\17@054835 by Gerhard Fiedler

picon face
Alan B. Pearce wrote:

> I was thinking in terms of some way of protecting oneself from making a
> small change, and then needing to back up a sizeable amount of data,
> just to get a backup of the small change, on a home system, where doing
> a large backup may be skipped "because most of that is backed up
> anyway".

I guess that depends on your backup philosophy...  I think the best one is
to have one and then "just do it" -- no matter whether or not or how big
the changes :)

> It looks to me like having some form of networked machine that one could
> use as a mirror would probably be the way to go.

Exactly what I do. I don't bother with micro managing backups and
repository versions -- I just run an incremental backup over my systems
once a day usually (not counting that a commit to a repository, which I do
more often, is already a form of backup: you now have a second copy, one in
your sandbox and one in your repository). Disk-to-disk over the network
makes this so fast that I really don't care whether there are a few megs of
repository files in there :)

At least with the way cvs(nt) stores version history, the individual
repository files rarely grow beyond some 200k or so. (That of course
depends a lot on the file size and the amount and depth of changes you make
to them, but this is a reasonable figure for reasonable situations... E.g.
a 50k file may grow to some 200k or so with all the history.) This should
give you an idea of the size ranges you're dealing with. This is of course
dependent on how the file data is stored in the repository, and I'm talking
about the current versions of cvs and cvsnt. svn may be different, and the
next version of cvsnt will have a database backend.

Gerhard

2006\01\17@064913 by Alan B. Pearce

face picon face
>> It looks to me like having some form of networked machine that
>> one could use as a mirror would probably be the way to go.
>
>Exactly what I do. I don't bother with micro managing backups and
repository versions -- I just run an incremental backup over my
>systems once a day usually (not counting that a commit to a
>repository, which I do more often, is already a form of backup:
>you now have a second copy, one in your sandbox and one in your
>repository). Disk-to-disk over the network makes this so fast that
>I really don't care whether there are a few megs of repository
>files in there :)


OK I think this answers my immediate questions, until I come to actually set
up such a system.

Thanks for the various comments. It has been enlightening to someone who has
"known about" but not used any form of CVS before.

2006\01\17@103747 by Alex Harford

face picon face
On 1/17/06, Gerhard Fiedler <TakeThisOuTlistsspamspamconnectionbrazil.com> wrote:
> Alex Harford wrote:
>
> > Because you shouldn't run a file level backup on a live repository.
>
> That's a pretty blanket statement, and as such plain wrong. I can and do
> run file level backups on my repository (cvsnt on Win2k),

From the rest of my reply, it's obvious that I was referring to an SVN
repository, not every type of version control system.

> and there's
> nothing problematic with it. The repository is at any point in time in a
> consistent state, and so is the backup.

However, this is still incorrect in regards to CVS.  While you are
doing a backup (tar or what have you), files could be checked in or
moved and you may only grab some of them.  At this point, the repo is
not in a consistent state, and restoring from this backup would
require manual labour to get it working.

Alex

2006\01\17@142159 by Shawn Wilton

picon face
Alan, if you use SVN, and dump the repo with the incremental tag, it's going
to take up little space, and it's a real backup, regardless of what Gerhard
has to say.  It preserves all of your history and your structure.  You are
backing up the genuine repository, not a sandbox.  Instead of duplicating
the effort of what you have already saved, you can just dump the changes
since your last save.  If you want, you could even set up a hook file to
dump the revisions to a specific directory/file every time you make say 5
commits.  Then you would be able to just dump at every interval of 5
revisions.  So 5, 10, 15, etc.
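The every-5-commits idea could be sketched as a post-commit hook like the following (Subversion invokes post-commit hooks with the repository path and the new revision number; the backup directory here is made up):

```shell
#!/bin/sh
# Sketch of a post-commit hook: every 5th revision, dump the last
# 5 revisions incrementally to a backup directory.

BACKUP_DIR=/backup/svn-dumps   # hypothetical destination

# due_for_dump REV INTERVAL -> true when REV is a multiple of INTERVAL
due_for_dump() {
    [ $(($1 % $2)) -eq 0 ]
}

main() {
    repos=$1
    rev=$2
    if due_for_dump "$rev" 5; then
        first=$((rev - 4))
        svnadmin dump "$repos" -r "$first:$rev" --incremental \
            > "$BACKUP_DIR/$(basename "$repos")-r$first-$rev.dump"
    fi
}

# The server supplies the arguments when the hook actually runs:
if [ $# -ge 2 ]; then
    main "$@"
fi
```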

The only way a diff is going to be larger than the original file you would
be backing up, is if the file changes by 50% or more (ignoring the header
portion of the file, and just taking in to account the actual changes).  In
which case you would have one old line and one new line.  Since few people
change a file by 50% or more at a time, I think you would still save space
doing diffs in the long run.

On 1/17/06, Alan B. Pearce <A.B.PearceEraseMEspamrl.ac.uk> wrote:
{Quote hidden}

> -

2006\01\17@142458 by Shawn Wilton

picon face
Inline.

On 1/17/06, Gerhard Fiedler <RemoveMElistsEraseMEspamspam_OUTconnectionbrazil.com> wrote:
>
> Shawn Wilton wrote:
>
> > He was asking for a way to backup only the changes.  I think an
> incremental
> > dump pretty much does that.  Hence, I disagree with your statement that
> an
> > incremental dump is not a backup.



Hey... He was not asking for "a way to backup only the changes", he was
> asking "do any of the CVS systems allow incremental backups, where only
> items changed since a last full backup would be backed up?" The diff
> solution is your interpretation of that, it works, but for me it's not
> really the best.


Yeah, still don't understand how you can say that an incremental backup is
not the same as "backup only the changes".  Since you can dump specific
revisions, (by definition incremental), and if you want you can choose to
use the incremental tag (dumps diffs between revisions), which most likely
produces smaller output, then yeah.  You can definitely do an incremental
backup.

Just because you prefer to backup the entire repo (why?) does not mean that
backing up revisions only is not an excellent idea.  Sorry you don't think
it's best.  But then, I think backing up the entire repo over and over is
rather wasteful, especially when it has to be done remotely and you have to
pay for bandwidth (lucky, I don't).

And I didn't make a "statement that an incremental dump is not a backup", I
> said "a diff between two revisions is not really an 'incremental backup'".
> And I gave reasons for that.


Unless he's making massive changes to a file, the best you can hope to get
in terms of a change, would be a diff between revisions.  Again, I will
state that backing up revisions is not making a backup of the sandbox as you
suggested.  It's a *real* backup (if there is such a thing) of the contents
and the structure of the repo.  Do a dump, ask for specific revisions, and
you will get the exact repo structure.  Do a load, and you get the exact
repo structure back including all history.  Exact replica.  Again, the
assumptions you made, I believe, are wrong.

I don't really know what to reply; it would be a lot easier if you didn't
> misrepresent the original posters' words.


I don't believe I am misinterpreting his words.  I believe I've
"interpreted" the op just fine.  He wanted a backup, I provided a method.
The only one misinterpreting would be you constantly referring to revision
dumps coming from the sandbox, when they are in fact small pieces of the
original repository, complete with structure and history.

--


Shawn Wilton (b9 Systems)
http://black9.com

2006\01\18@011954 by John De Villiers

flavicon
face
On Mon, 2006-01-16 at 12:43, Alan B. Pearce wrote:
> One other question, do any of the CVS systems allow incremental backups,
> where only items changed since a last full backup would be backed up? This,
> to me anyway, would seem to be a sensible way of doing it without needing to
> spend money on a tape drive or whatever, to do a full backup each time.

If you really want incremental backups then i'd do it this way.
1) Checkout a copy on your backup server somewhere
2) Write a script/batch file that will do an update of that copy before
the backup begins ( cvs -z3 update -Pd )
3) Do an incremental backup of that folder ( only the files that were
updated would get backed up ).
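Steps 1-3 above boil down to a tiny script on the backup machine. A sketch (hypothetical paths; assumes the module was checked out there once by hand):

```shell
# refresh_checkout updates a CVS working copy in place, so that a
# normal incremental backup afterwards sees only the files that changed.
refresh_checkout() {
    # -z3 compress transfers, -P prune empty dirs, -d pick up new dirs
    ( cd "$1" && cvs -z3 update -Pd )
}

# Example nightly job (made-up path and backup command):
# refresh_checkout /backup/checkouts/myproject
# tar -czf /backup/inc-$(date +%Y%m%d).tar.gz -C /backup/checkouts myproject
```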

I'd still do a full backup of the repository at least once a week. It
all depends how much your data is worth to you.

Getting a diff file from the repository is small & quick, but it's a
real pain to restore using diffs. You gotta apply all the diffs in
sequence from your last repository backup.

I dont do incrementals myself. I have another machine that is idle most
of the time and that has a huge disk. It runs a nightly update of all
cvs repositories that i have. Sure i dont have revision history, but
then i dont really need it.

2006\01\18@064437 by Gerhard Fiedler

picon face
Shawn Wilton wrote:

> Yeah, still don't understand how you can say that an incremental backup is
> not the same as "backup only the changes".  

I never said these were different. Let's maybe back up a little :) and let
me say that I may have come over too strong in previous messages, that I
never wanted to say that a diff dump in the way you described can not be
used for an incremental backup, and that depending on how svn stores data
in its repository file(s), this may be the adequate or even only way to do
it with svn. (I still may not like it :)

> Again, I will state that backing up revisions is not making a backup of
> the sandbox as you suggested.  It's a *real* backup (if there is such a
> thing) of the contents and the structure of the repo.  Do a dump, ask
> for specific revisions, and you will get the exact repo structure.  Do a
> load, and you get the exact repo structure back including all history.
> Exact replica.  Again, the assumptions you made, I believe, are wrong.

You are right in that I didn't quite understand what this revision dump
means in svn. I guess I get that now a bit better. But then, if such a dump
really contains the full data of those revisions, it's more than a
traditional diff and is possibly more than an incremental backup would
contain. A traditional incremental backup is just the changed data, and
doesn't allow you to get the full data back without the baseline backup.
Your revision dump, as you say, does that, so it could just be more data
than that incremental backup.

> you prefer to backup the entire repo (why?)

I probably said I (usually) run an /incremental/ backup over the entire
repository (and not only the repository). Every now and then (usually once
per week) I run a full backup. The reason is mostly that it's cheap, and
guaranteed. I backup to harddrives. It has the added benefit of making sure
all data can be read, and that the disk controller notices early enough
when a sector becomes marginal -- and not only after it's already gone.
Which may make me use my backups less often.

The other reason is that I do this to my whole system, not only the
repository. And here we get to my real beef with that solution: I think
that it's not about backing up this repository or that program's data, I
think it's about making backups of /everything/ as simple and workable as
possible -- so that it gets done, and so that I can go back to work as
quickly as possible in case something bad happens. It has less of a chance
to get done if I have to run a different backup process for every data set.
So I run (baseline and incremental) backups of my whole system. And I think
that's the way to go (if you don't have a sysadmin whose only work is to
admin systems) -- simple and generic. Everything's in there, and I don't
have to work out specific backup and restore procedures for every app or
server that I'm running.

> But then, I think backing up the entire repo over and over is rather
> wasteful, especially when it has to be done remotely and you have to pay
> for bandwidth (lucky, I don't).

I do that over and over (backing up my system), and I don't see me
producing much waste :)

As the revision diff dump you're talking about contains the full data to be
able to get the complete file set out of them, they are much more than the
difference between these revisions, and sending such diff dumps is
certainly not the most bandwidth conserving way to create an incremental
backup of the repository -- because with every such diff dump you seem to
send the full data necessary to restore all changed files in the change
set. This is probably much more data than a traditional diff patch. So in
this sense, it is not really "incremental", it is more (and more useful,
but also more "wasteful" :) than that.

OTOH, I'm not sure this bandwidth question is relevant in many cases. Few
people pay for bandwidth these days. And as long as we're talking about
source code repositories, few people write that much code. And if they do,
they usually can pay for the bandwidth :)

Gerhard

2006\01\18@065052 by Gerhard Fiedler

picon face
John De Villiers wrote:

> I dont do incrementals myself. I have another machine that is idle most
> of the time and that has a huge disk. It runs a nightly update of all
> cvs repositories that i have. Sure i dont have revision history, but
> then i dont really need it.

If you have a second machine with enough disk space, why don't you just
copy the repositories (the directory trees with all the RCS *,v files)
nightly? This gives you the same safety as your nightly updates, plus all
revision history, tags and commit comments. In case something bad happens
to one of your repositories, just copy the backed up RCS files back and
you're back in business.

Gerhard

2006\01\19@003418 by John De Villiers

flavicon
face
On Wed, 2006-01-18 at 13:47, Gerhard Fiedler wrote:
> If you have a second machine with enough disk space, why don't you just
> copy the repositories (the directory trees with all the RCS *,v files)
> nightly? This gives you the same safety as your nightly updates, plus all
> revision history, tags and commit comments. In case something bad happens
> to one of your repositories, just copy the backed up RCS files back and
> you're back in business.
>
> Gerhard
Because thats the way i chose to do it.
Also, svn doesnt use rcs *,v files. It uses a berkeley (sp?) database,
and i havent tried recovering a BDb yet. The correct way to back it up
would be to shut svn down first, and then copy the files.

I dont know BDb that well ( at all ) so i steered clear of it affecting
a recovery.



2006\01\19@005112 by Alex Harford

face picon face
On 1/18/06, John De Villiers <@spam@bbjRemoveMEspamEraseMEplz.co.za> wrote:
>
> I dont know BDb that well ( at all ) so i steered clear of it affecting
> a recovery.

John, I believe the recommended backend to use is FSFS now because BDB
was running into bugs when Subversion was hammering on it.

The backend type can be changed easily (svnadmin dump, create a new
repo as FSFS, svnadmin load)

Alex

2006\01\19@005200 by Shawn Wilton

picon face
You have two options for performing a recovery, both easy.  First method,
"svnadmin recover REPO", second method "db4.X_recover -h REPO".

I've had to do it a dozen times for various reasons, all of which were my
fault.

The *BEST* way to back up an SVN repo is to do a dump.  It's a complete
snapshot of the repo and its history.  However, failing that, revisional
dumps work great, and with a small shell script, you can completely automate
the process of rebuilding the repo if you should ever have to do that.  My
greatest recommendation...get RAID.


On 1/18/06, John De Villiers <EraseMEbbjspam@spam@plz.co.za> wrote:
{Quote hidden}

> -

2006\01\19@005937 by Shawn Wilton

picon face
Yeah, neither one is "recommended" per se:  <
http://svnbook.red-bean.com/en/1.1/ch05.html#svn-ch-5-sect-1.3>

Both have their adv. and disadv.



On 1/18/06, Alex Harford <@spam@harfordspam_OUTspam.....gmail.com> wrote:
{Quote hidden}

> -
