Messages from 146150

Article: 146150
Subject: Re: using an FPGA to emulate a vintage computer
From: Charles Richmond <frizzle@tx.rr.com>
Date: Sat, 06 Mar 2010 18:51:26 -0600
Quadibloc wrote:
> On Mar 5, 12:44 pm, Joe Pfeiffer <pfeif...@cs.nmsu.edu> wrote:
>> Quadibloc <jsav...@ecn.ab.ca> writes:
>>> On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>>>>         No, he's saying that C doesn't really implement an array type, the
>>>> var[offset] syntax is just syntactic sugar for *(var + offset) which is why
>>>> things like 3[x] work the same as x[3] in C.
>>> Um, no.
>>> x = y + 3 ;
>>> in a C program will _not_ store in x the value of y plus the contents
>>> of memory location 3.
>>> On a big-endian machine,
>>> long int x[5] ;
>>> x[0] = 3 ;
>>> x[1] = 12 ;
>>> y = x[0] ;
>>> or, on a little-endian machine,
>>> long int x[5] ;
>>> x[1] = 3 ;
>>> x[0] = 12 ;
>>> y = x[1] ;
>>> will not result in zero being stored in y, since a long int variable
>>> occupies more than one byte in storage, and hence the two assignments
>>> are being made to overlapping variables.
>>> Yes, C doesn't do _bounds checking_, but that is a far cry from
>>> "syntactic sugar for variable plus address offset".
>> I'm not quite sure what the point of your example is, somebody who is
>> better at programming languages than me would have to evaluate the claim
>> that C arrays aren't arrays.  But:
>>
>> #include <stdio.h>
>> int main()
>> {
>>     int a[4];
>>
>>     printf("a[2] at 0x%8x\n", &(a[2]));
>>     printf("2[a] at 0x%8x\n", &(2[a]));
>>     printf("(a+2) is 0x%8x\n", a+2);
>>     printf("(2+a) is 0x%8x\n", 2+a);
>>
>> }
>>
>> [pfeiffer@snowball ~/temp]# ./awry
>> a[2] at 0xbfff97b8
>> 2[a] at 0xbfff97b8
>> (a+2) is 0xbfff97b8
>> (2+a) is 0xbfff97b8
> 
> The 2[a] syntax actually *works* in C the way it was described? I am
> astonished. I would expect it to yield the contents of the memory
> location a+&2 assuming that &2 can be persuaded to yield up the
> location where the value of the constant "2" is stored.
> 
> Evidently there is some discrepancy between C and FORTRAN.
> 
> John Savard

Yes, "2[c]" does work in C as well as "c[2]", and yields the same 
results. The definition of "c[x]" is "*(c+x)", where the array "c" 
becomes a pointer to the first element, and the integer value "x" 
is scaled by the length associated with the pointer "c". "*(c+x)" 
will give the same result as "*(x+c)", so it's logical.

-- 
+----------------------------------------+
|     Charles and Francis Richmond       |
|                                        |
|  plano dot net at aquaporin4 dot com   |
+----------------------------------------+

Article: 146151
Subject: Re: using an FPGA to emulate a vintage computer
From: Charles Richmond <frizzle@tx.rr.com>
Date: Sat, 06 Mar 2010 18:54:16 -0600
Walter Bushell wrote:
> In article <20100305171635.e538ef18.steveo@eircom.net>,
>  Ahem A Rivet's Shot <steveo@eircom.net> wrote:
> 
>> On Fri, 5 Mar 2010 09:07:31 -0800 (PST)
>> Quadibloc <jsavard@ecn.ab.ca> wrote:
>>
>>> On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>>>
>>>>         No, he's saying that C doesn't really implement an array type,
>>>> the var[offset] syntax is just syntactic sugar for *(var + offset)
>>>> which is why things like 3[x] work the same as x[3] in C.
>>> Um, no.
>>>
>>> x = y + 3 ;
>>>
>>> in a C program will _not_ store in x the value of y plus the contents
>>> of memory location 3.
>> 	No but x = *(y + 3) will store in x the contents of the memory
>> location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
>> which is what I stated. You missed out the all important * and ()s.
> 
> No, that will compare x and the right val.
> 
> = is a comparasion operator in c.
> 
  "==" is a comparison operator in c.

-- 
+----------------------------------------+
|     Charles and Francis Richmond       |
|                                        |
|  plano dot net at aquaporin4 dot com   |
+----------------------------------------+

Article: 146152
Subject: Re: using an FPGA to emulate a vintage computer
From: Charles Richmond <frizzle@tx.rr.com>
Date: Sat, 06 Mar 2010 18:55:56 -0600
Peter Flass wrote:
> Walter Bushell wrote:
>> In article <20100305171635.e538ef18.steveo@eircom.net>,
>>  Ahem A Rivet's Shot <steveo@eircom.net> wrote:
>>
>>> On Fri, 5 Mar 2010 09:07:31 -0800 (PST)
>>> Quadibloc <jsavard@ecn.ab.ca> wrote:
>>>
>>>> On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>>>>
>>>>>         No, he's saying that C doesn't really implement an 
>>>>> array type,
>>>>> the var[offset] syntax is just syntactic sugar for *(var + offset)
>>>>> which is why things like 3[x] work the same as x[3] in C.
>>>> Um, no.
>>>>
>>>> x = y + 3 ;
>>>>
>>>> in a C program will _not_ store in x the value of y plus the contents
>>>> of memory location 3.
>>>     No but x = *(y + 3) will store in x the contents of the memory
>>> location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
>>> which is what I stated. You missed out the all important * and ()s.
>>
>> No, that will compare x and the right val.
>>
>> = is a comparasion operator in c.
>>
> 
> '=' is assignment, '==' is comparison.

I think there is *not* a single C programmer who has *not* had his 
hand slapped by making the mistake of using "=" when he meant 
"==". Thus the avalanche of replies...   :-)

-- 
+----------------------------------------+
|     Charles and Francis Richmond       |
|                                        |
|  plano dot net at aquaporin4 dot com   |
+----------------------------------------+

Article: 146153
Subject: Re: Modelsim PE vs. Aldec Active-HDL (PE)
From: KJ <kkjennings@sbcglobal.net>
Date: Sat, 6 Mar 2010 17:26:24 -0800 (PST)
> > What do you gain by trying to have tidy intermediate folders?
>
> as you said, tidiness...
>

tidy intermediate folders...i.e. folders that are not important to me
as the user of the tool, but are needed by the tool to do its job.
In other words, I don't care if the tool's private folders are tidy or
not.

> I use separate libraries for major categories within the design; e.g. memory
> interface, core logic, common (reusable) blocks, testbench - not separate
> libraries for foo, bar and bletch.
>

My point was why even bother to separate them unless there are name
clashes...or perhaps you're creating your own separate IP for resale
and want to avoid clashes with some other potential IP.

> I can't say it buys me a whole lot

I agree

> but it does help me keep the design hierarchy
> straighter - e.g. if the synthesis project contains something from the Testbench
> library, the design has gone seriously astray somewhere!
>

If it's actually a problem, the synthesis tool will complain quickly
(like less than 1 minute into the run)...but the synthesis tool won't
be looking at any libraries (testbench or other); it will create the
libraries itself based on the source files you tell it are in there to
be synthesized.  Whether you compile such a testbench file into 'work'
or 'testbench' won't matter.  If the source file is included it will
be analyzed.  If it happens to be synthesizable code (even if it is
only intended for sim testbench) synthesis will be OK with it.  It
won't generate any logic from this extraneous code since it won't be
called from within the hierarchy of the design to be synthesized.

I confess though, I'm not quite sure what your point is here for
compiling stuff into separate libraries.  It *sounds* like you're
talking about organizing source files into separate 'libraries'...in
which case what you said would make more sense but that's not at all
the same thing as compiling something into a library other than
'work'.

Kevin Jennings

Article: 146154
Subject: Re: using an FPGA to emulate a vintage computer
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Sun, 7 Mar 2010 02:13:04 +0000 (UTC)
In comp.arch.fpga Rick <richardcortese@gmail.com> wrote:
(snip)
 
> One of the other old processors from my stone knives and bear skin
> days was the RCA1802. It had IMHO a great feature for calling
> subroutines. Any one of the 16 general purpose registers could be made
> the program counter with a single instruction.

The OS/360 (and successor) calling mechanism isn't so different.

The BALR instruction branches to the address in a specified
register, while storing the address of the next instruction
in a register.  It is even allowed for both registers to be 
the same!  I have seen that used for coroutines, where an
appropriate BALR switches between routines using only one 
register to store the address in the other routine.

--glen

Article: 146155
Subject: Re: using an FPGA to emulate a vintage computer
From: Joe Pfeiffer <pfeiffer@cs.nmsu.edu>
Date: Sat, 06 Mar 2010 19:57:39 -0700
Quadibloc <jsavard@ecn.ab.ca> writes:
>
> The 2[a] syntax actually *works* in C the way it was described? I am
> astonished. I would expect it to yield the contents of the memory
> location a+&2 assuming that &2 can be persuaded to yield up the
> location where the value of the constant "2" is stored.

I don't have my copy handy, but I think it was documented that way back
in the original C language Bell Labs tech report.
-- 
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)

Article: 146156
Subject: Re: using an FPGA to emulate a vintage computer
From: Patrick Scheible <kkt@zipcon.net>
Date: 06 Mar 2010 19:08:50 -0800
Charles Richmond <frizzle@tx.rr.com> writes:

> Peter Flass wrote:
> > Walter Bushell wrote:
> >> In article <20100305171635.e538ef18.steveo@eircom.net>,
> >>  Ahem A Rivet's Shot <steveo@eircom.net> wrote:
> >>
> >>> On Fri, 5 Mar 2010 09:07:31 -0800 (PST)
> >>> Quadibloc <jsavard@ecn.ab.ca> wrote:
> >>>
> >>>> On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> >>>>
> >>>>>         No, he's saying that C doesn't really implement an 
> >>>>> array type,
> >>>>> the var[offset] syntax is just syntactic sugar for *(var + offset)
> >>>>> which is why things like 3[x] work the same as x[3] in C.
> >>>> Um, no.
> >>>>
> >>>> x = y + 3 ;
> >>>>
> >>>> in a C program will _not_ store in x the value of y plus the contents
> >>>> of memory location 3.
> >>>     No but x = *(y + 3) will store in x the contents of the memory
> >>> location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
> >>> which is what I stated. You missed out the all important * and ()s.
> >>
> >> No, that will compare x and the right val.
> >>
> >> = is a comparasion operator in c.
> >>
> > 
> > '=' is assignment, '==' is comparison.
> 
> I think there is *not* a single C programmer who has *not* had his 
> hand slapped by making the mistake of using "=" when he meant 
> "==". 

More than once...

-- Patrick

Article: 146157
Subject: Re: using an FPGA to emulate a vintage computer
From: "Dennis Ritchie" <dmr@bell-labs.com>
Date: Sun, 7 Mar 2010 04:33:43 -0000

"Quadibloc" <jsavard@ecn.ab.ca> wrote in message 
news:91a047d3-ee98-40b6-876e-9a7221168d5b@33g2000yqj.googlegroups.com...

> The 2[a] syntax actually *works* in C the way it was described? I am
> astonished. I would expect it to yield the contents of the memory
> location a+&2 assuming that &2 can be persuaded to yield up the
> location where the value of the constant "2" is stored.

> Evidently there is some discrepancy between C and FORTRAN.

Yes, it really does work as described, and was indeed documented
in the earliest C, and for that matter B and BCPL manuals.
C and Fortran are discrepant here.

  Dennis



Article: 146158
Subject: Re: Actel is now the only FPGA vendor with hard-core processor in the
From: radarman <jshamlet@gmail.com>
Date: Sat, 6 Mar 2010 21:12:02 -0800 (PST)
On Mar 5, 11:51 am, rickman <gnu...@gmail.com> wrote:
> On Mar 4, 3:14 pm, Andy Peters <goo...@latke.net> wrote:
>
> > On Mar 4, 11:56 am, Antti <antti.luk...@googlemail.com> wrote:
>
> > > as Xilinx has dropped hard processor IP in the latest families it
> > > makes ACTEL the only FPGA vendor whos latest product family does have
> > > hard processor IP.
>
> > Putting a processor inside an FPGA has proven to us to be a bigger
> > PITA than it's worth.
>
> Who exactly is "us"?  Are you with Xilinx, Actel or someone else?
>
> > Consider than instead of V4FX, you can use an S3AN and a standalone
> > PPC and you'll pay a whole lot less. Plus the various Freescale PPCs
> > have DDR memory and Ethernet and DMA controllers that don't suck, and
> > you're not stuck with crappy tools.
>
> > Embedding the processor in the FPGA is an interesting idea, but as
> > long as Brand X seems to think that the only people who do are the
> > types who want to run Linux on an FPGA, it's gonna suck for actual
> > embedded use.
>
> The big problem seems to be the problem of too many size combinations
> to be practical.  Mixing FPGA, CPU, SRAM and Flash on one chip can't
> be all things to all people.  Still, I think there are a few sweet
> spots that can be profitable.  A smallish FPGA (~3000 to 5000 LUT4s)
> combined with a CM3 in the typical MCU memory combinations is a
> powerful device, especially if it is available with a decent dual
> CODEC in a package on the smaller side (100 pins or less).  The cost
> of low end FPGAs seems to be I/O count driven, so a 5000 LUT4 FPGA can
> likely be combined with a CM3 (or maybe a CM0), etc, without
> increasing the cost significantly.  I am paying $10 for a 3000 LUT4
> FPGA in a 100 TQFP along with a $3 stereo CODEC.  If I could get an
> MCU in that package with even just 64 kB of Flash it would allow a
> very space constrained product to really have some capabilities... if
> they could keep the cost down to $15 at qty 100.
>
> Other than Cypress, no one else seems to see this as a viable market.
> I'm still waiting to see if the Cypress PSOC5 is all it's cracked up to
> be.  The Actel parts are way too expensive from what I've heard and I
> think the PSOC5 will be too rich as well.
>
> Rick

The PSoC5 looks interesting, but the EMIF interface is pretty limited.
The external memory bus is restricted to a 32MHz clock, due to the pad
I/O apparently, and is hardwired to require two clock cycles per
access. This might be livable if the EMIF core had a ZBT or FIFO
option, but everything I've seen so far indicates that it only
supports traditional async and sync SRAM. The only saving grace is
that it can be 16-bits wide, giving you 32MB/s to an external host.

32MB/s is nothing to sneeze at for a lot of designs, but it would have
been nice to see them find a clever way to double that - even if it
meant DDR techniques.

I'm actually looking at putting together a little breadboard with a
PSoC5 and a Cyclone III to play around with, and see what kind of
sustained data rates can be achieved - once Cypress gets around to
sampling them and updates Creator with the EMIF core. I'll probably
emulate a sync SRAM in the FPGA, and memory map the I/O.

Of course, the real advantage of the PSoC is the extremely flexible
analog. Strapping a 20-bit del-sig ADC to anything can be absurdly
expensive, and you are generally stuck with it once you put it down.
The PSoC, on the other hand, lets you trade speed for resolution and
play all kinds of games with the clocks after the board is done - and
generally create the "perfect" analog hardware for the task on-chip.
Even better, you can do it on the fly by poking the configuration
registers with software.

If they are even close to being as good as Cypress claims, I've got
lots of jobs for them.

Article: 146159
Subject: Re: using an FPGA to emulate a vintage computer
From: Uwe Kloß <uwe.kloss@gmx.de>
Date: Sun, 07 Mar 2010 07:33:00 +0100
Quadibloc schrieb:
> On Mar 5, 12:44 pm, Joe Pfeiffer <pfeif...@cs.nmsu.edu> wrote:
>> #include <stdio.h>
>> int main()
>> {
>>     int a[4];
>>
>>     printf("a[2] at 0x%8x\n", &(a[2]));
>>     printf("2[a] at 0x%8x\n", &(2[a]));
>>     printf("(a+2) is 0x%8x\n", a+2);
>>     printf("(2+a) is 0x%8x\n", 2+a);
>>
>> }
>>
>> [pfeiffer@snowball ~/temp]# ./awry
>> a[2] at 0xbfff97b8
>> 2[a] at 0xbfff97b8
>> (a+2) is 0xbfff97b8
>> (2+a) is 0xbfff97b8
> 
> The 2[a] syntax actually *works* in C the way it was described? I am
> astonished. I would expect it to yield the contents of the memory
> location a+&2 assuming that &2 can be persuaded to yield up the
> location where the value of the constant "2" is stored.

You can think of the "a" in "a[4]" as a named numerical (integer)
constant (alias), giving the address of the memory block that was
allocated by the definition.

So there is no difference, between using that (named) constant or an
explicit numerical constant.

The only differences between:
   (1)   int a[4];
and:
   (2)   int * a = malloc( 4 * sizeof(int));
are the place where the memory is allocated and that the value in (2)
may be changed later. (And the amount of typing, of course!)

In both cases you can use "a[1]" or "*(a+1)" for access.

> Evidently there is some discrepancy between C and FORTRAN.
Only "a tiny bit"! ;-)

Grüße,
Uwe

Article: 146160
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Sun, 7 Mar 2010 08:18:13 +0000
On Sat, 06 Mar 2010 15:03:52 -0500
Walter Bushell <proto@panix.com> wrote:

> No, that will compare x and the right val.
> 
> = is a comparasion operator in c.

	Bzzzt wrong!

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146161
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Sun, 7 Mar 2010 08:29:18 +0000
On Sat, 6 Mar 2010 01:58:43 -0800 (PST)
Quadibloc <jsavard@ecn.ab.ca> wrote:

> On Mar 5, 10:16 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> 
> >         No but x = *(y + 3) will store in x the contents of the memory
> > location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
> > which is what I stated. You missed out the all important * and ()s.
> 
> Intentionally. My point was that, while there is _some_ truth to the
> claim that C arrays tread rather lightly on the ground of hardware
> addressing, the claim that C doesn't have arrays at all, and the C
> array subscript operator does nothing at all but add two addresses
> together... is not *quite* true.

	The C subscript operator does nothing other than add two
numbers and dereference the result; that last action is rather important.
The validity of constructs like 2[a] and *(2+a) make this clear - as does
the equivalence of a and &(a[0]) or of *a and a[0] where a is a pointer.

	C does have good support for pointers and adding integers to
pointers and for declaring blocks of storage with an array like syntax.

> If C doesn't have "real" arrays, it at least makes a rather good

	Arrays are not a real type in C. If they were, they would be passed
by value in function calls instead of being passed by reference, and it
would be necessary to use a construct like &(a[0]) to pass the address of
the first element of the array instead of just a.

> attempt to simulate them. Unless one's standards are such that FORTRAN
> doesn't quite have "real" arrays either, and you need to go to Pascal
> for real arrays, there isn't that much to complain about in the case
> of C.

	I am not saying that the C arrays are not useful as they are, just
that they fall short of being a type in the sense that int, char, long,
pointer and struct are.

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146162
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Sun, 7 Mar 2010 08:45:29 +0000
On Sat, 6 Mar 2010 02:01:30 -0800 (PST)
Quadibloc <jsavard@ecn.ab.ca> wrote:

> The 2[a] syntax actually *works* in C the way it was described? I am
> astonished. I would expect it to yield the contents of the memory
> location a+&2 assuming that &2 can be persuaded to yield up the
> location where the value of the constant "2" is stored.

	Yes of course it does - why else would I have mentioned it in my
first post in this threadlet? a is a pointer, 2 is an integer and 2[a]
is the same as a[2] is the same as *(a+2) and the rules for adding pointers
and integers are well defined in C. This is the heart of my original point,
array notation in C is syntactic sugar for pointer arithmetic (and also
for allocation which I neglected to mention in my original post).

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146163
Subject: Question in verilog testbench
From: Frank <yangyang880729@gmail.com>
Date: Sun, 7 Mar 2010 01:56:22 -0800 (PST)
Hi, all

I have a question about testbenches written in Verilog. Why do we always
define the inputs of the MUT as reg and the outputs of the MUT as wire, just
the opposite of the input/output definitions in Verilog modules?
So, more clearly: what are the basic issues that I should know when I
have to decide the type of a variable (reg or wire)?

Thanks
Frank

Article: 146164
Subject: Re: Laptop for FPGA design?
From: John Adair <g1@enterpoint.co.uk>
Date: Sun, 7 Mar 2010 02:25:48 -0800 (PST)
If you can get it the T9900 is better than the T9800, but they are fairly
rare, with most companies seeming to push the quad core instead.

I have not got a mobile I7 yet but we do have desktop I7s and they have
been very good. Laptops using the desktop I7 have been a definite no,
with battery lifetime of 1hr being typical, but when I get the chance I
will try the mobile I7 as it promises much. Parallel processors will
be of more use in a couple of years when tools make better use of them.

On OS I think there are X64 drivers but I would only go that way if I
had a really large design to deal with. Bugs and problems are far more
common in X64 and Linux versions of the tools and with the relatively
tiny user base bugs can take a while to surface and dare I say it get
fixed. Life is busy enough without adding unnecessary problems.

John Adair
Enterpoint Ltd.

On 6 Mar, 21:45, Michael S <already5cho...@yahoo.com> wrote:
> On Mar 6, 5:37 pm, John Adair <g...@enterpoint.co.uk> wrote:
>
> > I7 laptops if you can get the mobile versions are best but do watch
> > the battery lifetime. Most software isn't using the multiple cores so
> > the next best Core2 duo (T9800) based can be good too.
>
> Why not T9900?
> At single FPGA compilation it should easily beat any 35W member of
> core-i7 family, including i7-620M. FPGA tools love on-chip cache above
> anything else. In fact, it's possible that even T9600 is faster than
> i7-620M.
> i7-820QM would be faster, yet, but at 45W TDP you will likely find it
> only in special heavyweight models.
>
> Of course, if you often find yourself compiling several variants in
> parallel, stick with i7/i5, since in that scenario core2duo is pretty
> weak.
>
> >Have a look at
> > a HP 8730w for one based on T9800. This can go 3-4hrs and double that
> > with an extension battery they have that clips on. That's series
> > computing on battery for most of a normal man's working day. There are
> > quad core ones too but I don't think the extra money is worth
> > spending. Better to spend money on a good SSD drive which are great
> > for making things go faster too if you can get the right one.
>
> > HP are going to release a 8740w at some point in time. Hopefully soon
> > and that is I7 based from what little is in the public domain.
>
> > HP are still offering XP downgrades last time I looked if you want to
> > use Windows as OS. That is one the reasons I use them. Dell do the
> > same.
>
> Do they offer XP64 drivers?
> XP32 is sufficient for 98% of today's FPGAs but the upper 2% (the
> biggest Stratix-IV devices, for example) require 64-bit tools.
>
>
>
> > John Adair
> > Enterpoint Ltd.


Article: 146165
Subject: Re: Question in verilog testbench
From: Jon Beniston <jon@beniston.com>
Date: Sun, 7 Mar 2010 02:30:02 -0800 (PST)
You need to set the type according to how it will be assigned a value.
reg if written to in an always or initial block, wire otherwise.

Cheers,
Jon

Article: 146166
Subject: Re: Question in verilog testbench
From: Jonathan Bromley <jonathan.bromley@MYCOMPANY.com>
Date: Sun, 07 Mar 2010 10:39:10 +0000
On Sun, 7 Mar 2010 01:56:22 -0800 (PST), Frank wrote:

[We really must write this up as a FAQ.....]

>I have a question in the testbench written by verilog. Why we always
>define the inputs of MUT as reg and outputs of MUT as wire,

Not "always".  The MUT (DUT) outputs must be connected to wires,
but the inputs can be connected either to reg or to wire - it's
your choice.

>opposite with the in/output definition in verilog modules.

It is sometimes that way, but don't worry about it.  That
is not the basic problem.  See below.

>So more clearly, what are the basic issues that I should know when I
>have to decide the type of a variable(reg or wire)?

The rules are extremely simple:

****************************************************************
*  If an object is given its value by assignment in procedural *
*  code, then that object must be a variable (reg).            *
*                                                              *
*  If an object is given its value by any other method, then   *
*  that object must be a net (wire).                           *
****************************************************************

However, it is often difficult to get this exactly right
in practice.  

First there is the problem of definition.  In my wording
of the rules, above, what do I mean by "assignment in 
procedural code"?
- The thing on the left-hand side of an assignment (= or <=)
  in the body of an always, initial, task or function;
- anything that is passed to a task's inout or output
  argument.

"Given its value by any other method" means any of these:
- it is connected to an output or inout port of a module instance;
- it is connected to an output or inout port of a UDP or
  primitive instance;
- it is the left-hand side of a continuous assignment ("assign")
  statement at the top level of a module;
- it is an input or inout port of a module.

As a result of the rules.....

- Inside a module, any inout or input ports must be a net (wire)
  because it gets its value from whatever you connect to the 
  port when you instance the module.
- Inside a module, an output port can be either a net (wire)
  or a variable (reg); the choice depends on how that thing 
  gets its value within the module's code.  The value of the 
  thing is passed out through
  the port to the wire you connect on the outside.
- When you create an instance of a module, you must connect
  wires to all its output and inout ports because those wires
  get their value by connection, not by procedural assignment.
- When you create an instance of a module, you are free to connect
  either a wire or a reg to its input port.  The choice depends
  on how that thing gets its value in your outer module.

Hope this helps.  Note that the rules are slightly different in
SystemVerilog, but it's a good idea to learn the Verilog basics!
-- 
Jonathan Bromley

Article: 146167
Subject: Re: using an FPGA to emulate a vintage computer
From: Greg Menke <gusenet@comcast.net>
Date: Sun, 07 Mar 2010 07:48:01 -0500

Ahem A Rivet's Shot <steveo@eircom.net> writes:

> On Sat, 6 Mar 2010 01:58:43 -0800 (PST)
> Quadibloc <jsavard@ecn.ab.ca> wrote:
>
>> On Mar 5, 10:16 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>> 
>> >         No but x = *(y + 3) will store in x the contents of the memory
>> > location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
>> > which is what I stated. You missed out the all important * and ()s.
>> 
>> Intentionally. My point was that, while there is _some_ truth to the
>> claim that C arrays tread rather lightly on the ground of hardware
>> addressing, the claim that C doesn't have arrays at all, and the C
>> array subscript operator does nothing at all but add two addresses
>> together... is not *quite* true.
>
> 	The C subscript operator does do nothing other than adding two
> numbers and dereferencing the result, that last action is rather important.
> The validity of constructs like 2[a] and *(2+a) make this clear - as does
> the equivalence of a and &(a[0]) or of *a and a[0] where a is a pointer.

Yet when dereferencing arrays of rank >= 2, dimensions are automatically
incorporated into the effective address, so it's not quite equivalent to
a simple addition of pointer and offset.

Gregm

Article: 146168
Subject: Re: Modelsim PE vs. Aldec Active-HDL (PE)
From: Brian Drummond <brian_drummond@btconnect.com>
Date: Sun, 07 Mar 2010 13:08:57 +0000
On Sat, 6 Mar 2010 17:26:24 -0800 (PST), KJ <kkjennings@sbcglobal.net> wrote:

>I confess though, I'm not quite sure what your point is here for
>compiling stuff into separate libraries.  It *sounds* like you're
>talking about organizing source files into separate 'libraries'...in
>which case what you said would make more sense but that's not at all
>the same thing as compiling something into a library other than
>'work'.

You can't keep source files in separate libraries - separate folders, yes, 
hence (I presume) your quotes around 'libraries'.

While I do that, I also tend to use a VHDL library structure that reflects the
same source folder structure and design intent. 

Except when tool bugs prevent it.

It's really mostly habit, a satisfactory (to me) way of working acquired back in
my Modula-2 days. I'm not making wild claims that it's better in any substantial
way, but VHDL was designed to allow compilation into separate libraries...

Speculating why: it would presumably have allowed compiled libraries to be
re-used across multiple projects, in the days when compilation and
synthesis were expensive operations. Nowadays that is trivially unimportant,
except for the highly optimised treatment given to the standard libraries.

Tools could make use of it for distributing IP as compiled libraries instead of
source, but they don't...

... aah, that must be it - I'm just waiting for the tools to catch up! ;-)

- Brian

Article: 146169
Subject: Re: using an FPGA to emulate a vintage computer
From: Quadibloc <jsavard@ecn.ab.ca>
Date: Sun, 7 Mar 2010 05:35:51 -0800 (PST)
Links: << >>  << T >>  << A >>
On Mar 7, 1:45 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> On Sat, 6 Mar 2010 02:01:30 -0800 (PST)
>
> Quadibloc <jsav...@ecn.ab.ca> wrote:
> > The 2[a] syntax actually *works* in C the way it was described? I am
> > astonished. I would expect it to yield the contents of the memory
> > location a+&2 assuming that &2 can be persuaded to yield up the
> > location where the value of the constant "2" is stored.
>
>         Yes of course it does - why else would I have mentioned it in my
> first post in this threadlet? a is a pointer, 2 is an integer, and 2[a]
> is the same as a[2] is the same as *(a+2), and the rules for adding
> pointers and integers are well defined in C. This is the heart of my
> original point: array notation in C is syntactic sugar for pointer
> arithmetic (and also for allocation, which I neglected to mention in my
> original post).

If a[2] was the same as *(a+2), then, indeed, since addition is
commutative, it would make sense that 2[a], being the same as *(2+a),
would be the same.

But a[2] is the same as *(&a+2) which is why I expected 2[a] to be the
same as *(&2+a).

Unless in *(a+2) "a" suddenly stops meaning what I would expect it to
mean. In that case, C has considerably more profound problems than not
having arrays.

John Savard

Article: 146170
Subject: Spartan 3 minimum clock pulse width
From: "Andrew Holme" <ah@nospam.com>
Date: Sun, 7 Mar 2010 13:37:51 -0000
Links: << >>  << T >>  << A >>
What's the minimum clock pulse width I can drive into a Spartan 3 global 
clock input via LVDS?  I'm using the -5 speed grade device with VDDC cranked 
up to 1.25V.  I've currently got it working at 200 MHz ~ 50% duty = 2.5ns 
pulses; but I want to lower the duty cycle.  Can I squeeze the pulses to 
below 2ns?  I can't test this without re-spinning my board.  I see 
sub-nanosecond minimum clock widths and 720 MHz maximum toggle rates in the 
datasheet; but I also see max 10pF input capacitance.  50% duty cycle keeps 
the swing centred on VCM; but I lose that advantage with my low duty cycle. 
I still reckon it should work; but it would be good to hear from others 
who've pushed these limits.

TIA



Article: 146171
Subject: Re: Question in verilog testbench
From: Frank <yangyang880729@gmail.com>
Date: Sun, 7 Mar 2010 05:43:15 -0800 (PST)
Links: << >>  << T >>  << A >>
On Mar 7, 6:39 pm, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
> On Sun, 7 Mar 2010 01:56:22 -0800 (PST), Frank wrote:
>
> [We really must write this up as a FAQ.....]
>
thank you, that really helps:)


Article: 146172
Subject: Re: Laptop for FPGA design?
From: Michael S <already5chosen@yahoo.com>
Date: Sun, 7 Mar 2010 06:11:07 -0800 (PST)
Links: << >>  << T >>  << A >>
On Mar 7, 12:25 pm, John Adair <g...@enterpoint.co.uk> wrote:
> If you can get it, the T9900 is better than the T9800, but they are fairly
> rare, with most companies seeming to push the quad core instead.
>
> I have not got a mobile I7 yet but we do have desktop I7 and they have
> been very good.

Sure, desktop i7s are fast. With 8MB of cache and less reliance on
turbo-boost, one can expect them to be fast.
On the other hand, the 35W mobile variants have 4MB or smaller caches and
are critically dependent on turbo-boost, since, relative to mobile C2D,
their "normal" clock frequency is low.
Still, it's just my gut feeling; I never benchmarked mobile i7 vs
mobile C2D, so I could be wrong about their relative merits.

> Laptops using the desktop I7 have been a definite no, with
> battery lifetime of 1hr being typical, but when I get the chance I
> will try the mobile I7 as it promises much. Parallel processors will
> be more use in a couple of years when tools have better use of them.
>
> On OS I think there are X64 drivers but I would only go that way if I
> had a really large design to deal with. Bugs and problems are far more
> common in X64 and Linux versions of the tools and with the relatively
> tiny user base bugs can take a while to surface and dare I say it get
> fixed. Life is busy enough without adding unnecessary problems.
>
> John Adair
> Enterpoint Ltd.
>

For the last year or so we have done nearly all our FPGA development on
Ws2003/x64. So far, no problems. Even the officially deprecated Rainbow
(now SafeNet) USB Software Guards work fine. XP64 is derived from the
same code base.
We almost never use the 64-bit tools, but very much appreciate the ability
to launch numerous instances of memory-hungry 32-bit tools. Is that more a
matter of convenience than necessity? In a single-user environment, yes.
But why should we give up a convenience that costs so little?




Article: 146173
Subject: Re: Spartan 3 minimum clock pulse width
From: nico@puntnl.niks (Nico Coesel)
Date: Sun, 07 Mar 2010 14:13:20 GMT
Links: << >>  << T >>  << A >>
"Andrew Holme" <ah@nospam.com> wrote:

>What's the minimum clock pulse width I can drive into a Spartan 3 global 
>clock input via LVDS?  I'm using the -5 speed grade device with VDDC cranked 
>up to 1.25V.  I've currently got it working at 200 MHz ~ 50% duty = 2.5ns 
>pulses; but I want to lower the duty cycle.  Can I squeeze the pulses to 
>below 2ns?  I can't test this without re-spinning my board.  I see 
>sub-nanosecond minimum clock widths and 720 MHz maximum toggle rates in the 
>datasheet; but I also see max 10pF input capacitance.  50% duty cycle keeps 
>the swing centred on VCM; but I lose that advantage with my low duty cycle. 
>I still reckon it should work; but it would be good to hear from others 
>who've pushed these limits.

Isn't there a duty cycle limit in the datasheets? I did have some
problems driving a 100MHz clock at CMOS level into a Spartan 3. It
turned out the duty cycle was around 30% because the driver couldn't
handle 100MHz.

-- 
Failure does not prove something is impossible, failure simply
indicates you are not using the right tools...
nico@nctdevpuntnl (punt=.)
--------------------------------------------------------------

Article: 146174
Subject: Re: Laptop for FPGA design?
From: General Schvantzkoph <schvantzkoph@yahoo.com>
Date: 7 Mar 2010 14:28:50 GMT
Links: << >>  << T >>  << A >>
On Sun, 07 Mar 2010 06:11:07 -0800, Michael S wrote:

> On Mar 7, 12:25 pm, John Adair <g...@enterpoint.co.uk> wrote:
>> If you can get it, the T9900 is better than the T9800, but they are fairly
>> rare, with most companies seeming to push the quad core instead.
>>
>> I have not got a mobile I7 yet but we do have desktop I7 and they have
>> been very good.
> 
> Sure, desktop i7s are fast. With 8MB of cache and less reliance on
> turbo-boost, one can expect them to be fast. On the other hand, the 35W
> mobile variants have 4MB or smaller caches and are critically dependent
> on turbo-boost, since, relative to mobile C2D, their "normal" clock
> frequency is low. Still, it's just my gut feeling; I never benchmarked
> mobile i7 vs mobile C2D, so I could be wrong about their relative
> merits.
> 
>> Laptops using the desktop I7 have been a definite no, with battery
>> lifetime of 1hr being typical, but when I get the chance I will try the
>> mobile I7 as it promises much. Parallel processors will be more use in
>> a couple of years when tools have better use of them.
>>
>> On OS I think there are X64 drivers but I would only go that way if I
>> had a really large design to deal with. Bugs and problems are far more
>> common in X64 and Linux versions of the tools and with the relatively
>> tiny user base bugs can take a while to surface and dare I say it get
>> fixed. Life is busy enough without adding unnecessary problems.
>>
>> John Adair
>> Enterpoint Ltd.
>>
>>
> For the last year or so we have done nearly all our FPGA development on
> Ws2003/x64. So far, no problems. Even the officially deprecated Rainbow (now
> SafeNet) USB Software Guards work fine. XP64 is derived from the same
> code base.
> We almost never use the 64-bit tools, but very much appreciate the ability
> to launch numerous instances of memory-hungry 32-bit tools. Is that more a
> matter of convenience than necessity? In a single-user environment, yes.
> But why should we give up a convenience that costs so little?

I have benchmarked Core2s vs iCore7s. The 6M cache Core2s are faster on a 
clock-for-clock basis than the 8M cache iCore7 when running NCVerilog. 
The iCore7 is a little faster on a clock-for-clock basis when running the 
Xilinx place and route tools. The cache architecture of the iCore7 sucks: 
it's a three-level cache vs a two-level cache on the Core2. Also, there is 
less cache per processor on the iCore7 (2M) than on the Core2 (3M), so the 
degradation in performance is greater. Finally, the absolute clock rate 
for the Core2s is higher than it is for the iCore7; combine that with the 
faster clock-for-clock simulation performance and the Core2 is the clear 
winner for FPGA development.



