PICList Thread
'[OT] LINUX - UNIX - CAD for PIC ... was SuperPCB, '
1999\03\31@104204 by Wagner Lipnharski

Well, this is something that bothers me... for sure I understand that a
different platform works in a completely different way... organization,
task control, and lots of other things, but, still running 80x86 code,
the same PC BIOS, I/O, video, disk, and so on, it means that basically a
"normal" DOS program should run under Unix (Linux), and even part of any
Windows program... at least the "executable" part of the code, while the
"transition" and other control areas would crash because of the platform
differences...  If this is correct, it should not be too difficult to
convert a pure DOS program to Unix...

Second point: if Linux (Unix) is basically a university and
"revolutionary" platform (well, not so much nowadays, but more than
Windows at least), why are its CAD programs "really expensive" if
students can't pay for them?  An invitation to piracy again?  This kind
of marketing approach I never understood.  Probably SUN producers should
give them away... ;)

Wagner

"Richard A. Smith" wrote:
{Quote hidden}

1999\03\31@111647 by Andy Kunz

>Second point; If Linux (Unix) is basically an university and
>"revolutionary" platform (well, not so much nowadays, but more than
>Windows at least), why its CAD programs are "really expensive"? if
>students can't pay for it?  Piracy invocation again?  This kind of
>marketing approach I never understood.  Probably SUN producers should
>give them away... ;)

Because, Wagner, it _ISN'T_ a "university platform."  It's a genuine
commercial platform.

Andy

  \-----------------/
   \     /---\     /
    \    |   |    /          Andy Kunz
     \   /---\   /           Montana Design
/---------+   +---------\     http://www.montanadesign.com
| /  |----|___|----|  \ |
\/___|      *      |___\/     Go fast, turn right,
                              and keep the wet side down!

1999\03\31@113939 by Richard A. Smith

On Wed, 31 Mar 1999 10:40:45 -0500, Wagner Lipnharski
wrote:

>"normal" DOS program should run under Unix (Linux), and even part of any
>Windows program... at least the "executable" part of the code, while the
>"transition" and other control areas would crash because the platform
>differences...  If this is correct, it is not too difficult to convert a
>pure DOS program to Unix...

Well, you are forgetting about the major addressing
difference between real mode and protected mode.  It's a
nontrivial task to switch between them, even with the
virtual-8086 ("virtual real") mode of the Pentiums.
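To make the addressing gap concrete, here is a minimal Python sketch (the function name is invented for illustration) of how a real-mode 8086 forms a physical address from a segment:offset pair. Protected-mode code instead goes through descriptor tables into a flat address space, so real-mode pointer arithmetic like this has no direct equivalent there:

```python
def real_mode_linear(segment, offset):
    """Real mode: physical address = segment * 16 + offset,
    truncated to 20 bits (the 8086's 1 MB address space)."""
    return ((segment << 4) + offset) & 0xFFFFF

# Many different segment:offset pairs alias the same physical byte.
assert real_mode_linear(0xB800, 0x0000) == 0xB8000  # VGA text buffer...
assert real_mode_linear(0xB000, 0x8000) == 0xB8000  # ...reached two ways

# Addresses wrap at 1 MB -- the behavior the A20 gate was added to control.
assert real_mode_linear(0xFFFF, 0x0010) == 0x00000
```

A protected-mode OS like Linux sees none of this aliasing, which is one reason raw DOS code can't just be dropped in.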

DOSEMU (the DOS emulator) is supposed to be really good,
though, at running DOS programs under Linux.

>Second point; If Linux (Unix) is basically an university and
>"revolutionary" platform (well, not so much nowadays, but more than
>Windows at least), why its CAD programs are "really expensive"? if
>students can't pay for it?

The difference is that the CADs I am referring to are in
a different class of CAD.  They are huge packages that
include simulation, IC design, VHDL, and stuff like that.
Stuff that companies like Intel use.  Mentor Graphics
donated a bunch of seats to my alma mater, but I think the
maintenance contract for them is in the $100k region.
However, it allows a student to totally design and simulate
a microprocessor circuit.  You assemble your code and load
the .obj into the simulator as well.  The simulator
will then simulate your circuit while running your uP code!


However, I see an announcement that Eagle is now available
for Linux, so I guess low-cost CAD is finally coming to the
UNIX world.



--
Richard A. Smith                         Bitworks, Inc.
rsmith@bitworks.com                      501.521.3908
Sr. Design Engineer        http://www.bitworks.com

1999\03\31@211414 by Bob Drzyzgula

> Second point; If Linux (Unix) is basically an university and
> "revolutionary" platform (well, not so much nowadays, but more than
> Windows at least), why its CAD programs are "really expensive"? if
> students can't pay for it?  Piracy invocation again?  This kind of
> marketing approach I never understood.  Probably SUN producers should
> give them away... ;)
>
> Wagner

- Before I descend into a rant, I will point out that
- both Avant! and Synopsys have announced Linux ports.
- The chip-level EDA guys have only recently started
- to use Windows in any big way, so their code base is
- a lot more ready for Linux and their customer base
- is heavily populated with Unix and Linux bigots. Thus,
- they are quite a bit ahead of the Board-level EDA
- guys.

Hmmm. Linux is an operating system, DOS is an operating
system, Windows NT is an operating system. All of them
serve the basic functions required to boot a machine
and load programs.

There are a number of things that scare vendors off
from developing for Linux:

 * Since the Linux OS source code is available to
   anyone for free, there has been some concern over the
   ability of these companies to support their programs
   running on Linux. (Although many of us will let out
   a big guffaw over this one -- in many such cases, it
   wasn't like they were "supporting" their product on
   other operating systems :-)  What they fear is that
   they will have to sink all kinds of tech support time
   into trying to fix problems that only exist because
   someone "hacked" their kernel. Slowly, companies are
   finding that they can simply insist that stations run
   a specific distribution (e.g.  RedHat 5.2) or specific
   kernel and libraries (e.g. Linux 2.0.36 with libc6
   release greater than xxxx).

 * They believe that the "port" to Linux will be
   difficult; they believe that there must be all sorts
   of idiosyncratic cruft floating around in Linux
   that will wind up creating a huge sinkhole for their
   developers. However, the experience of most companies
   that try to port Unix software to Linux is that it is
   the easiest port they ever have done. When a senior
   executive in Informix's development side was asked what
   needed to be done to port their software to Linux,
   he reported "We typed 'make'." That was it -- they
   loaded the source code onto a Linux machine, did a
   compile, bombarded it with tests, put it out for beta,
   and never had to change a thing. It just worked. As
   stories like this get around, vendors start to lose
   their reluctance. Windows software is another matter...

 * Many companies have trouble believing that
   people who don't want to pay eighty bucks for an
   operating system are ever going to want to pay for
   applications software. They look at open source
   programmers busily trying to re-implement everything
   from math libraries to transaction processing monitors,
   and it scares them -- no one wants to be in the position
   of having to compete with their own customers, who
   are busily swapping code for free. They also look at
   sites like Slashdot (http://slashdot.org) and see
   the vehemently anti-proprietary whining that goes
   on and think that those are typical Linux users. But
   increasingly, vendors who have dipped their toe in the
   water find that the water is just fine; they just go
   ahead and climb up to the 3-meter board for a dive
   into the deep end. Often vendors start out porting
   their low-end stuff to Linux as a token "hey, we do
   Linux" move, and then come back a couple of months
   later committing to port their whole product line. What
   they didn't understand is that Linux users often use
   Linux simply because it is a better OS, not because
   they are cheap, and they often were re-implementing
   all those services simply because the ISVs refused to
   port their products to Linux. The receptiveness to
   commercial software among most Linux users has been
   a huge and pleasant surprise to many software houses.

 * In many cases, ISVs had already sold their soul
   to Microsoft long before Linux ever became seen as
   a viable option. Autodesk, for example, has totally
   dumped all of its Unix code and now develops
   exclusively for the Win32 platform. Disentangling
   AutoCAD from the Microsoft Foundation Classes would
   be an absolute nightmare. So Autodesk doesn't even
   really discuss the issue. Any question anybody
   asks Autodesk about Linux gets a "Nope, not doing it"
   response. Hell, they had already screwed all their
   Unix customers years ago, why should they worry about
   a few new Unix bigots? But deep down, I can certainly
   imagine them beginning to panic -- if Linux really
   takes off like many believe it will, if this Microsoft
   backlash grows, then Autodesk could be in really
   bad shape. A Linux port of code that is totally
   entangled with Win32 could be a huge undertaking,
   and they are years behind the curve; some other
   CAD vendor with a decent, current Unix (or generic)
   source base could have a huge jump on them. That
   is, unless they have a porting effort already
   going on that they're just not talking about; this
   would not be unprecedented in the history of Linux
   ports.

 * They don't know how to deal with a product that has
   no all-powerful central source. Linus doesn't
   count here; he is totally incorruptible and totally
   uninterested in building "relationships". Linus works
   on Linux because he has a ball doing it; Linus has
   steadfastly refused to profit from Linux because --
   amazingly -- he fully understands that if he were to
   do so it would break the spell.  This is where vendors
   like RedHat and Caldera come in. RedHat is a company
   that will send out someone to shake your hand over a
   deal. RedHat will accept venture capital. RedHat will
   make promises and follow through. Caldera likewise
   has a very specific worldview and will do what it
   takes to achieve that. They will buy technology, they
   will license commercial software to grow the base OS
   (something that RedHat is loath to do), and they will
   dump piles of money into development.  Caldera, for
   example, implemented System V Streams in the Linux
   kernel (no small task, believe me) simply so that
   they could get Novell's portable Netware code to run
   on Linux. Caldera now sells a full-blown commercial
   Netware server, using Novell's code and with full NDS
   server capability. It ain't cheap (it is free for a
   couple of users, but you start paying per-user fees
   above that), but it is Linux, and since that streams
   code is in the kernel, it is in the kernel and it is
   free and anyone can use it. This kind of thing never
   could have happened without a company like Caldera who
   is willing to take the risk on this kind of project,
   and Novell would never have messed around with
   something like this if it were not for Caldera (and
   the fact that Caldera is largely owned by Novell's
   ex-CEO; ah, to have money to burn and an axe to grind.)

 * Many vendors simply have bad management. A great
   deal of survival in the software marketplace over the
   past few years has been tactics, and most managers are
   tacticians. Good, visionary strategists, people who can
   "think different", are few and far between. Partly,
   I think, this has been due to Microsoft's hegemony.
   Microsoft has had the only really successful strategy,
   and as long as Microsoft continues on this roll,
   a good tactician could simply hitch themselves to
   Microsoft's coattails and look like a genius. In this
   scheme, independent thought and vision could be a
   real drawback. So you wound up with a lot of managers
   who were like Dilbert's boss, who can barely plan his
   way out of a toilet stall.  Software houses with this
   problem won't do a Linux port simply because it
   isn't the thing that worked before.
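Returning to the support point in the first bullet above: pinning support to "kernel X or later" reduces, in code, to a dotted-version comparison. A minimal sketch (the function name is made up for illustration):

```python
def kernel_at_least(running, minimum):
    """True if a dotted kernel version string, e.g. '2.0.36',
    is at or above the minimum a vendor says it supports."""
    as_tuple = lambda s: tuple(int(part) for part in s.split("."))
    return as_tuple(running) >= as_tuple(minimum)

assert kernel_at_least("2.0.36", "2.0.36")       # exactly the supported kernel
assert kernel_at_least("2.2.1", "2.0.36")        # a newer series also passes
assert not kernel_at_least("2.0.35", "2.0.36")   # one patch level too old
```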

> Well, this is something that bothers me... for sure I understand that a
> different platform works in a complete different way... organization,
> task control, and lots of other things, but, still running 80x86 code,
> the same PC BIOS, I/O, Video, Disk, and so on, it means that basically a

...

If PCs were like PICs or other MCUs being used without
benefit of an RTOS, then this would be true. But a PC is
a pretty darned complex beast to run without any sort of
operating system at all, even if it is only a glorified
program loader like MS-DOS. For all its grand dysfunction,
DOS does in fact provide a number of system services to
loaded programs, and those services are not statically
linked into the DOS executable; rather they are handled
by the DOS kernel code that loads itself into memory and
waits to be called... DOS is sort of like the granddaddy of
all TSRs.  Most modern DOS programs (if there *are* any DOS
programs that can be viewed as "modern") also rely on other
TSRs that are commonly found on DOS systems -- SMARTDRV,
for example. In addition, unless you can honest-to-goodness
find a way to fit your program and all services into 1MB
(640KB + I/O), then you need some way to use "extended" or
"expanded" memory, which means a "32-bit" or "protected
mode" extender.  Again, none of this stuff is statically
linked into a DOS executable, so you really can't run most
DOS programs without there being something there to provide
DOS services, which then means reverse-engineering all that
obfuscated crap that Microsoft sold for too many years.
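The point that DOS services are *not* linked into the executable can be sketched as a toy model (every name here is invented for illustration): the program only knows an interrupt number, and whatever "kernel" installed a handler on that vector does the real work. Run the program without the handler installed and it fails, which is exactly why a bare DOS executable can't run without DOS, DOSEMU, or some other provider of INT 21h services:

```python
# Toy model of DOS system services reached through an interrupt vector.
interrupt_vectors = {}  # interrupt number -> handler, filled by the "kernel"

def install_dos(console):
    """Stand-in for DOS loading itself resident and hooking INT 21h."""
    def int21h(ah, data):
        if ah == 0x09:  # AH=09h: print a '$'-terminated string
            console.append(data.split("$")[0])
    interrupt_vectors[0x21] = int21h

def program():
    """The 'executable': it contains no I/O code of its own,
    only the convention 'call INT 21h with AH=09h'."""
    interrupt_vectors[0x21](0x09, "Hello from DOS$")

console = []
install_dos(console)   # without this line, program() raises KeyError
program()
assert console == ["Hello from DOS"]
```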

Under Linux, DOSEMU provides a virtual machine
implementation that provides a generally adequate (and
getting much better of late) platform on which one can run
a DOS implementation, of which there are currently several:
MS-DOS, of course, as well as (now) Caldera's DR-DOS 7.02
(7.03 is in beta), IBM's PC-DOS 7 (which is still selling
in COMPUSA last I checked), and the freeware FreeDOS. There
are of course others such as Datalight's ROMDOS and the
Micro/SYS RUN.EXE, but other than the first big four (MS,
Caldera, IBM and FreeDOS), most other implementations are
pretty special-purpose. Redhat ships a FreeDOS-based hard
disk image for DOSEMU to boot from, while Caldera (surprise)
provides a DR-DOS boot image in OpenLinux (if you have
another brand of Linux, you can get a DR-DOS boot image
for DOSEMU by download from Caldera's website). Generally,
Caldera's works better (again, not a surprise since major
parts of DR-DOS predate even MS-DOS) and one can fairly
reliably run most DOS programs using that system. But there
still are major shortcomings, and generally if you want to
run DOS programs you will be best off running a native
DOS, or OS/2 (OS/2 truly *is* a better DOS than DOS).

Hope this helps.

--Bob

--
============================================================
Bob Drzyzgula                             It's not a problem
bob@drzyzgula.org                         until something bad happens
============================================================


'[OT] LINUX - UNIX - CAD for PIC ... was SuperPCB, '
1999\04\02@030941 by Dr. Imre Bartfai
On Wed, 31 Mar 1999, Wagner Lipnharski wrote:

> Well, this is something that bothers me... for sure I understand that a
> different platform works in a complete different way... organization,
> task control, and lots of other things, but, still running 80x86 code,
> the same PC BIOS, I/O, Video, Disk, and so on, it means that basically a
          ^^^^^^^  FALSE!
IMHO Linux avoids the whole BIOS, and all its device drivers operate at the
hardware (i.e., port) level.

> "normal" DOS program should run under Unix (Linux), and even part of any
> Windows program... at least the "executable" part of the code, while the
> "transition" and other control areas would crash because the platform
> differences...  If this is correct, it is not too difficult to convert a
> pure DOS program to Unix...
[ snip]

Imre

1999\04\02@031356 by Dr. Imre Bartfai

Hi,
I disagree with the statement "Linux ... is a genuine commercial platform". The
kernel, the operating system, and everything around the OS are free (see the
GPL!). I think this statement should be revised before it is said. There is no
argument for saying such a thing. Sorry.

Imre


On Wed, 31 Mar 1999, Andy Kunz wrote:

{Quote hidden}

1999\04\02@094526 by Andy Kunz

I didn't mean commercial in the "you buy it" sense but in the "you can buy
applications which use it" sense.

Also, that was a private reply to you, not a public one, as I recall.

Andy


At 09:14 AM 4/2/99 +0200, you wrote:
{Quote hidden}

1999\04\02@101350 by John Griessen

Linux seems to be a for-profit model of doing business.  The profit just
comes after you give away a large chunk of enabling product so you can sell
refinements to it.  There's lots of value in that.  I've heard that recently
people like EEs and such have not had a hard time installing and using
Linux.  It's also got much more uptime than Windows NT or Win98 when you
run 5 or so programs (engineering programs included) at once on a machine.
Searching for Linux on the web, I found some reviews that claim one week is
the hit you take for being a newbie at it.  I might take that hit to get the
uptime in the near future... especially if I identify some particular CAD SW
to use or set up a network of three computers.

John G  Austin TX

> -----Original Message-----
Because, Wagner, it _ISN'T_ a "university platform."  It's a genuine
> commercial platform.
>
> Andy

1999\04\02@115812 by Wagner Lipnharski

I understand that "commercial" means something that you do business
with, and there is no business without any kind of "reimbursement" or
"exchange" involved, so you could say that the programs that run on
Linux are commercial, not the platform itself.  One could say that Linux
is a platform where "commercial software" runs.  As far as I understand,
Linux was born as a small subset, and it got its place in the sun... no
jokes here (SUN SYSTEMS)... with very nice operational stability it
promises nice growth.

"Dr. Imre Bartfai" wrote:
>
> Hi,
> I disagree the statement "Linux ... is a genuine commercial platform". The
> kernel, the operating system and all things around the OpSYS is free (see
> GPL!). I think this statement should be revised before said. There is no
> argument to say such. Sorry.

1999\04\02@142224 by Andy Kunz

And if you read my statement, as quoted by you below, you see that it is a
platform on which one can make money.  It is not strictly an academic thing
(i.e., cheap for students) or a learning tool.

People can run their businesses on it (as users), around it (as
developers), or in it (as Caldera and Red Hat do).  Therefore, it is a
commercially viable alternative to Redmond's crap.


At 11:57 AM 4/2/99 -0500, you wrote:
{Quote hidden}

1999\04\02@144710 by Wagner Lipnharski

Owa, owa, as far as I understand, since the PC was born in the 80's,
different hardware configurations have been possible because the common
software interface, named the "BIOS", provides compatibility between
different hardware and software through a common structured system.
Video cards just =need= their own BIOS to deal with different hardware
configurations and processors, so how does Linux deal with
that?  I used to program in assembler, and in physical hardware
addressing when speed is required, but it is tooooooo risky, mainly for
video.

So, if Linux can deal with deep 24-bit video cards without using their
BIOS, I really need to applaud the Linux development people... how many
video cards with different hardware do we actually have in the
market???



"Dr. Imre Bartfai" wrote:
{Quote hidden}

1999\04\02@164955 by Bob Drzyzgula

On Fri, Apr 02, 1999 at 11:48:34AM -0500, Wagner Lipnharski wrote:
> Owa, owa, as far I understand since the PC was born in 80's is that the
> different hardware configurations were possible since the common
> interface in software, named "BIOS" does the compatibility between
> different hardware and software, through a common structured system.
> Video cards just =need= their own BIOS to deal with different hardware
> configurations and processors, how Linux understand how to deal with
> that?  I use to program in assembler, and in physical hardware
> addressing when speed is required, but it is tooooooo risky, mainly for
> video.
>
> So, if Linux can deal with deep 24 bits video cards without use their
> BIOS I really need to applause Linux development people... how many
> video cards we have with different hardware we have actually in the
> market???

Most (PC) video cards these days implement the basic VGA
command set, with specific functions living at specific
addresses in memory. The PC's BIOS (Award, AMI, Phoenix,
etc) knows what all those functions are and how to call
them. Beyond this generic base VGA functionality, video
cards are as different from one another as night is from
day. Video card (and video chip) manufacturers vary wildly
in how they view the programming information one needs
in order to get good performance from their cards. Some,
like S3, have been very forthcoming with information about
their cards, while others, like 3Dlabs, have historically
been reluctant to release this information. Rather, 3Dlabs
and some other manufacturers have thought it sufficient to
release Windows drivers for their chips and tell users of
other operating systems that their cards are sold only for
the purpose of running Windows. This vile attitude has
resulted in great difficulty in writing decent drivers
for other operating systems, and great hostility toward
those companies by Linux enthusiasts.
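The "generic base VGA functionality" mentioned above includes a memory-mapped text buffer at physical address 0xB8000, where every screen cell is a character byte followed by an attribute byte packing the colors. A toy encoder (the function name is invented here) shows that layout:

```python
def text_cell(ch, fg, bg):
    """Encode one VGA text-mode cell: the character byte, then an
    attribute byte with the background color in the high nibble and
    the foreground color in the low nibble."""
    attribute = ((bg & 0x0F) << 4) | (fg & 0x0F)
    return bytes([ord(ch), attribute])

# "OK" in light grey (0x7) on black (0x0), as it would sit in the
# text buffer that VGA-compatible cards expose at 0xB8000.
row = b"".join(text_cell(c, fg=0x7, bg=0x0) for c in "OK")
assert row == b"O\x07K\x07"
```

Everything beyond this shared base -- accelerated drawing, high-resolution modes, 24-bit color -- is where the cards diverge and where the driver pain starts.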

Linux, at its core, generally only knows how to use a
video card in one of the VGA text modes on a PC. Remember,
Linux has been ported not just to the x86 platform, but
also to the PowerPC, SPARC, Alpha, MIPS, StrongARM and
the Motorola 680x0. It runs on platforms as diverse as the
3Com PalmPilot (which uses the Motorola Dragonball chip,
which is in turn based on the M68K), 30-processor Sun
UltraEnterprise servers (SPARC), X86 desktop PCs, the iMac
(PowerPC), the Cobalt Qube microserver (MIPS), and M68K
VME systems. Thus, Linux has to be quite flexible in what
to expect out of its "graphics card".

For most of its history (30 years), the Unix OS kernel
has typically been kept separate from the graphics
display. While the most commonly-used graphics system has
been, since the late '80s, the X Window System (which was
developed at MIT and other Universities), there have been
others, including Sun's "SunWindows" and "NeWS", as well
as the "MGR" system from Bellcore. Some systems have been
hybrids between X and various add-on functions, and there
have been several experimental graphics systems that were
done totally independently from X. Linux, for example,
has an alternative graphics interface called "svgalib",
which is used most often by game programs such as Doom
and Quake; it is as close as you can get to PC graphics
console programming under Linux. Lately, Linux developers
have been working on a uniform graphics interface that
could provide base-level services that would be usable
both by the text console programs as well as X.

The way that X works is that a single process, called
the X "server", runs on a display device, handling all
interaction with the graphics screen. The X server knows
how to draw boxes and shading and to move things about
in 24-bit color with overlapping windows and such. It
can, for example, be told to save the information in an
obscured window so that the exposure of a window can happen
instantaneously, or it can be told to toss anything that
has been obscured in order to save memory at the expense
of performance.

Each  X server is written to be able to manage a particular
type of display. Thus, in Linux, one needs to pick from
among the S3 server, the ATI server, the SVGA server
(which is optimized for a number of low-volume chipsets),
the monochrome server, and so on. This is actually one
of the hardest parts of setting up Linux (although sound
cards are even harder) and one often will have to run to
the card manual to find out the supported timings, dot
clocks, scan rates, etc. (some of which are properties
of the card and some of which are properties of the
monitor). The configuration software gets better all
the time, though, and of late it has been pretty nearly
idiot-proof for common chipsets like S3.  Most of the
work on producing these servers has been done by a group
of interested parties which calls themselves the XFree86
Consortium (http://www.xfree86.org). Despite the name,
it should be pointed out that X itself is free, so that
what they are emphasizing as free is the integration
of X with various operating systems (such as Linux and
FreeBSD) and graphics devices. It should also be noted
that at this point, XFree86 runs on pretty much the same
set of platforms as does Unix, so the "86" in the name
is a historical artifact. Also, I should point out that
there are a few commercial X server vendors, such as Xi
Graphics and Metro-X. Metro-X systems have been certified
for use in military systems.

Programs that need to display on a graphics device use
the X client API, which is extensively documented
and the complexity of which is probably overshadowed only
by that of the Java 2 API. The "client" programs use the
X protocol to pass instructions to the X server. The
X protocol is layered on TCP/IP, so it matters not at
all if the client program is running locally or over the
network -- programs always talk to the display in the same
manner no matter where they run. This is in fact one of
the things most sorely-missed by Unix emigres to Windows
-- it is *extremely* frustrating to learn that one cannot
naturally display the graphical output of a program running
on a server on one's desktop workstation.
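That location transparency hangs on the DISPLAY convention: an X client reads a name of the form "host:display.screen" and, for a remote host, traditionally connects to TCP port 6000 plus the display number. A rough parser (hypothetical name, ignoring edge cases such as Unix-socket-only displays) makes the convention concrete:

```python
def parse_display(display):
    """Split an X DISPLAY string 'host:display.screen'.
    An empty host conventionally means a local connection."""
    host, _, rest = display.partition(":")
    number, _, screen = rest.partition(".")
    return host, int(number), int(screen or 0)

host, number, screen = parse_display("bigserver:1.0")
assert (host, number, screen) == ("bigserver", 1, 0)
assert 6000 + number == 6001   # the TCP port the client would dial

# Same program, local display: only the environment changes, not the code.
assert parse_display(":0") == ("", 0, 0)
```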

X is an unbelievably flexible thing. In contrast to the
Win32 API, X can look like almost anything, and support
almost any user interface. This has in the past made it
a vast and wonderful playground for graphics programmers.
But it has also made it hell for users, who often have
to confront a half-dozen user interfaces all warring
with one another on a single display. There has been
a great deal of work on this over the past couple of
years, and there are now only three or so interfaces that
are being actively worked on, and even those groups are
cooperating to an unprecedented degree. You may have heard
of GNOME (http://www.gnome.org), the GNU Network Object
Model Environment, which is heavily based on the GTK+
graphical toolkit (http://www.gtk.org) that was first
developed for the GIMP (Gnu Image Manipulation Program)
(http://www.gimp.org) project. GTK+ actually is an
extremely powerful and straightforward system and is
growing in popularity -- the Unix port of the Mozilla
(http://www.mozilla.org) source code is being based
on GTK+. Also KDE (http://www.kde.org), the K Desktop
Environment (nee the Kool Desktop Environment), is based
on the half-commercial/half-opensource QT toolkit from
the Norwegian company Troll Tech (http://www.troll.no).
Both the GNOME and the KDE projects are using a CORBA
(http://www.corba.org -- are we seeing a pattern here?)
object model to support Windows-style drag-and-drop and
object embedding, only in the case of the Linux stuff,
it will all be totally transparent from a network level;
you'll be able to drag a file icon from an application
on one computer to an application on another computer
with total transparency -- Windows can only sort of do
this -- you can drag files around from server to server,
for example, but all the real work has to happen within
a single box.

But to answer your question, Wagner: yes. Linux (and X)
has abstracted the interface to the graphics device to such
a great extent that one should never need to hand-craft
assembly code to get a high-performance 24-bit graphical
display on one's PC. All that hard work has already
been done for you, and it even works cross-platform and
cross-network.  There are *no issues*, for example, in
displaying the 24-bit graphical output from a SPARCstation
running Solaris on an X86 PC running Linux. I do it all
the time, day in and day out, at work.

--Bob

--
============================================================
Bob Drzyzgula                             It's not a problem
bob@drzyzgula.org                         until something bad happens
============================================================

1999\04\03@075052 by Dr. Imre Bartfai

Hi,

in my statement I meant the motherboard's BIOS, of course. However, AFAIK,
the different Linux X servers (this is the way Linux deals with
different video cards) drive the VGA chip really at the h/w level, i.e. they
do not use the video BIOS of the card. There are a lot of (I recall,
about 300) different configurations and cards supported by the roughly 10
different X servers. Really, I need to specify the RAMDAC type, the
ClockChip type, the VGA chip, the different frequencies of the monitor,
etc., if I set up a particular X server on a particular machine. And,
surprisingly, it IS stable and not risky. Maybe I am in error, but that is
how I am informed; if you want to know more, look at http://www.linux.org

Regards,
Imre


On Fri, 2 Apr 1999, Wagner Lipnharski wrote:

{Quote hidden}
