Messages from 19750

Article: 19750
Subject: Re: Design security
From: Pat <PDobson@sercoalpha.demon.co.uk>
Date: Tue, 11 Jan 2000 10:10:03 +0000
In FPGA encryption, the key is only temporarily stored on the FPGA; it
is generally generated by a controlling processor during the handshake
process of PKC. A lot of algorithms are completely public domain; a good
example is DES (Data Encryption Standard). You can find details of it
all over the Internet, for free (usually). The clever bit comes with the
use of public and private keys, key management and 'trusted' user
groups. Basically, who cares if the FPGA is compromised and analysed,
because all that will be found is DES!
        As for how strong an algorithm you need to use, it all
depends who it's being designed for. If you design one for a bank
they'll probably have different requirements to those of a country's
government, and each organisation will have its own checking procedure
(and will probably be quite secretive about it too!)
        When you say minimal silicon area, quite how small do you expect
(want) algorithms to be??


                        PAT.

In article <859i2q$ga3@src-news.pa.dec.com>, Hal Murray
<murray@pa.dec.com> writes
>
>> But you do - the whole config memory needs to go on chip, not just the
>> security bit(s), and you'll want the config memory re-writable as well,
>> otherwise the antifuse devices are equivalent.
>
>Sorry I wasn't clear.  I was thinking of the case where the bit stream
>was encrypted and there was some decryption logic on the FPGA.  Then
>the only storage you need on the FPGA is for the decryption key.
>
>I like your suggestion of making the key area write-once.
>
>
>Hmm..  Suppose we had an FPGA that processed an encrypted bit
>stream.  How good does the encryption have to be?
>
>I've seen a lot of work on designing encryption algorithms that
>are good and can be implemented to run fast on modern CPUs.  Is
>there any work on algorithms that use minimal silicon area?
>

-- 
Pat
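
[Archive editor's note: Pat's point above is Kerckhoffs' principle - publishing the algorithm costs nothing so long as the key stays secret. The toy sketch below illustrates that idea only; it is emphatically not DES and not a real bitstream cipher, and the hash-counter keystream is a stand-in chosen purely for illustration.]

```python
# Toy illustration of "all that will be found is DES": the cipher is fully
# public; security rests entirely on the key held in the device.
# NOT a real cipher - a hash-counter keystream stands in for DES here.
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Counter-mode keystream from a public hash; the algorithm is no secret.
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def crypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

bitstream = bytes(range(64))                  # stand-in configuration data
key = b"write-once on-chip key"
encrypted = crypt(key, bitstream)
assert encrypted != bitstream
assert crypt(key, encrypted) == bitstream     # only the key recovers it
```

Knowing every line of this code tells an attacker nothing useful; recovering the bitstream still requires the key stored on the device.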
Article: 19751
Subject: Re: Decoding RSPC (Reed Solomon Product Code)
From: "MK Yap" <mkyap@REMOVE.ieee.org>
Date: Tue, 11 Jan 2000 18:37:05 +0800
Hi!

So I'm shortening the RS(31,29) to become RS(26,24)... for encoding and
decoding, the feedback arithmetic (addition and multiplication) is based on
GF(2^8), so there shouldn't be any problem...

Quite a headache for me; I can't even get the syndrome correct.

For many of the downloaded programs that support shortened RS, when I
changed the nn (i.e. RS(nn,kk,8)) from 31 to 26, the result wouldn't be
correct unless the condition nn = 2^m - 1 is met. I manually generated the
GF(2^8) tables and used them for encoding and decoding... well, still the
same; I can't get the correct RS parity bytes...  Any idea??

Would appreciate it if someone could shed some light on this.


MKYap

MK Yap <mkyap@ieee.org> wrote in message
news:85c7uk$hue$1@clematis.singnet.com.sg...
> Hi,
>
> Thanks for all the responses... I have got hold of the few RS codec
> programs, still playing ard with them...
>
> How does the actual encoding and decoding process vary with the shortened
> RS code, say RS(26,24)? Can I just feed the data bytes (m=8) into the
> encoder (shift registers) as per normal? i.e. shift in 24 data bytes (and
> shift in another 2 empty bytes??) and the register content is the 2 parity
> bytes?
>
> I have a few sets of sample answers but I can never get to the answer no
> matter how I shift...
>
> btw, how do you shorten the normal RS(31,27) (double error correcting??)
> to RS(26,24)?
>
> Thanks for all helps/advice  :-)
>
>
> MKYap
>
>
> <jhirbawi@yahoo.com> wrote in message news:85b91j$2tn$1@nnrp1.deja.com...
> > In article <84savt$bo2$1@violet.singnet.com.sg>,
> >   "MK Yap" <mkyap@ieee.org> wrote:
> > > Hi,
> > >
> > > I'm writing a program (VHDL or C) to enable block encoding and
> > > decoding of CD sectors.  In the ECC (error correction coding) field,
> > > RSPC (Reed-Solomon Product Code) is used. The RSPC is a product code
> > > over GF(2^8) producing P and Q parity bytes.  The GF(2^8) field is
> > > generated by the primitive polynomial
> > > P(x) = x^8 + x^4 + x^3 + x^2 + 1
> > > The P parities are (26,24) RS codewords over GF(2^8) and the Q
> > > parities are (45,43) RS codewords over GF(2^8).
> > >
> > > My question is: how can I write the encoding and decoding algorithm
> > > for the ECC field?? The RS codes used are non-standard (n,k) codes,
> > > in which n is usually 2^m - 1, with m=8 in this case...
> > > I tried to look for more info from books but it is really limited...
> > > I came across some books saying that conventional RS decoding can be
> > > used, that is, the Berlekamp and Peterson-Weldon algorithms.  But I
> > > see no connection between them because the derivation is based on a
> > > fundamental which is different.
> > >
> > > Please enlighten me... by pointing to some books, papers, web sites
> > > or perhaps an explanation of the theory behind them...  Thank you
> > > very much!!
> >
> > You're dealing with shortened Reed-Solomon codes; you may not find much
> > that specifically describes their encoding and decoding because it is
> > almost the same as for the non-shortened version. For the encoder,
> > encode as a generic cyclic code over GF(2^8). For the decoding, the
> > Euclidean Algorithm decoder will work for the shortened codes the same
> > way it would for the non-shortened ones, with n = your codeword length
> > instead of 2^m-1.  I'm sure the Berlekamp-Massey algorithm will also
> > work with only minor modifications.
> >
> > Jacob Hirbawi.
> >
> >
> > Sent via Deja.com http://www.deja.com/
> > Before you buy.
>
>
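
[Archive editor's note: Jacob's advice - encode the shortened code as a generic cyclic code over GF(2^8) - can be sketched concretely. The Python sketch below is hedged, not CD-spec-exact: it uses the field polynomial quoted in the thread (x^8+x^4+x^3+x^2+1, i.e. 0x11D), and it *assumes* generator roots alpha^0 and alpha^1, which should be checked against the actual RSPC specification.]

```python
# Sketch of systematic encoding for a shortened RS code such as (26,24)
# over GF(2^8). Field polynomial from the thread: x^8+x^4+x^3+x^2+1 (0x11D).
# The generator roots (alpha^0, alpha^1) are an assumption, not spec-checked.

PRIM = 0x11D
EXP = [0] * 512          # antilog table, doubled to avoid mod-255 in mul
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= PRIM
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def generator(nparity):
    # g(x) = (x - alpha^0)(x - alpha^1)...; coefficients highest degree
    # first. In GF(2^m), minus is the same as plus (XOR).
    g = [1]
    for i in range(nparity):
        term = [1, EXP[i]]
        r = [0] * (len(g) + 1)
        for j, a in enumerate(g):
            for k, b in enumerate(term):
                r[j + k] ^= gf_mul(a, b)
        g = r
    return g

def rs_encode(data, nparity=2):
    # Shortening changes nothing here: encoding 24 symbols is the same LFSR
    # division as encoding 2^m-1-nparity, just with fewer (implicitly zero)
    # leading symbols.
    g = generator(nparity)
    msg = list(data) + [0] * nparity
    for i in range(len(data)):
        c = msg[i]
        if c:
            for j in range(1, len(g)):
                msg[i + j] ^= gf_mul(g[j], c)
    return msg[len(data):]          # the parity symbols

def poly_eval(p, x):
    # Horner evaluation of a polynomial (highest-degree-first) over GF(2^8)
    y = 0
    for c in p:
        y = gf_mul(y, x) ^ c
    return y

data = list(range(1, 25))                     # 24 data bytes
codeword = data + rs_encode(data)             # 26-byte shortened codeword
# Sanity check: syndromes at the generator roots must be zero.
assert all(poly_eval(codeword, EXP[i]) == 0 for i in range(2))
```

The syndrome check at the end is exactly where the original poster was stuck: with the right parity bytes, the codeword evaluates to zero at every generator root, shortened or not, because shortening is equivalent to prepending zero symbols.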


Article: 19752
Subject: RISC in FPGA?
From: "Damjan Lampret" <lampret@opencores.org>
Date: Tue, 11 Jan 2000 11:54:55 +0100
Hi,

Does anyone know of any 32-bit RISC processor that runs at 50 MHz or faster
in an FPGA (Virtex or Apex)? Thanks.

regards, Damjan



Article: 19753
Subject: Altera Flex10K bitstream compatibility ?
From: Nicolas Matringe <nicolas@dotcom.fr>
Date: Tue, 11 Jan 2000 11:59:06 +0100
Hi
One of our engineers who doesn't know much about FPGAs just told me "All
right, I replaced the 10K10 with a 10K20, this should work fine". I
stopped him, pointing out that just because they are pin-to-pin compatible
doesn't mean they can be programmed with the same bitstream.
Do any of you know what would happen if we tried? Just curious...

Nicolas MATRINGE           DotCom S.A.
Conception electronique    16 rue du Moulin des Bruyeres
Tel 00 33 1 46 67 51 11    92400 COURBEVOIE
Fax 00 33 1 46 67 51 01    FRANCE
Article: 19754
Subject: Re: hobbyist friendly pld?
From: "Stewart, Nial [HAL02:HH00:EXCH]" <stewartn@europem01.nt.com>
Date: Tue, 11 Jan 2000 12:03:09 +0000
Richard Erlacher wrote:
> Actually, the software from VANTIS, now owned by LATTICE, is, indeed,
> free.  Unfortunately, most of the smaller devices (and this applies
> not only to Lattice/Vantis, but to most others as well) require a
> relatively complex programmer which implements the programming
> algorithms that the component manufacturers won't give you.  The
> larger parts are so pin-rich that you have little choice but to
> program them in-situ, which is inconvenient, since you can't socket
> them without spending more than what a programmer for the small
> devices would cost, and such large-pin-count packages are not very
> friendly to the hobbyist, particularly since you have to build them
> into a PCB.
> Dick
> >Dan Rymarz wrote:
> >> Hello all,
> >> I am looking for a programmable logic technology I can use that
> >> also has a free+permanant (not 30 day trial) compiler available,
> >> that uses JTAG or similar few-wire (4 for jtag etc.) programming
> >> mode.  I don't need a large gate count.  ...


I would recommend a look at Altera devices. They now give away (well, you
have to download it) a free version of their Maxplus2 software that will
handle the MAX 7000 devices, with VHDL design entry, and which will drive a
JTAG programmer. A data sheet with a schematic for the JTAG ISP programmer
(the ByteBlasterMV; it's just a 74HC244 with a few termination resistors)
is also available from their site.


Try 

http://www.altera.com

Nial Stewart.
Article: 19755
Subject: CPLD interconnect?
From: graham@staff-pc69.hscs.wmin.ac.uk (Graham Seaman)
Date: 11 Jan 2000 13:45:27 GMT
Hi,

I'm trying to find out how the universal interconnect matrix
on modern CPLDs works (I mean the classic multiple-PAL style,
not what Altera means by CPLD), but am finding it hard
to find any references at all. I believe they're mux-based,
with overlapping subgroups of 'column' signals connected
to muxes of varying sizes - but beyond that vague statement
I know nothing at all!
Does anyone know of any good references on this topic, with regard
to the speed/flexibility tradeoffs of the alternatives? (Preferably
not just sales talk ;-)

Thanks
Graham
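
[Archive editor's note: the speed/flexibility tradeoff Graham asks about can be poked at numerically. The sketch below is a toy model under loudly stated assumptions - random tap subsets per output mux, and a naive placement that demands one fixed line per mux. It describes no vendor's actual switch matrix.]

```python
# Toy model of a partially populated interconnect matrix: each output mux
# taps only a subset of the column lines. A full crossbar (fanin == lines)
# always routes; a sparse one trades silicon area for routing failures.
import random

def routability(n_lines=36, fanin=12, n_muxes=20, trials=2000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        # Fixed random tap pattern for each mux (the "architecture").
        taps = [set(rng.sample(range(n_lines), fanin))
                for _ in range(n_muxes)]
        # A naive placement: each mux is asked for one particular line.
        wanted = [rng.randrange(n_lines) for _ in range(n_muxes)]
        ok += all(w in t for t, w in zip(taps, wanted))
    return ok / trials

full = routability(fanin=36)      # complete crossbar
sparse = routability(fanin=12)    # partially populated matrix
assert full == 1.0 and sparse < full
```

A real router chooses which column line a signal enters on, so actual architectures route far better than this naive placement suggests; the model only shows why the overlap of tap patterns, not just their size, determines flexibility.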

Article: 19756
Subject: Re: HW resources increased
From: "Paul Butler" <c_paul_butler@yahoo.com>
Date: Tue, 11 Jan 2000 07:49:18 -0600

Pat <PDobson@sercoalpha.demon.co.uk> wrote in message
news:ubjMkCAg7ve4Ixk+@sercoalpha.demon.co.uk...
> Hello,
>         As far as I can see (and you can call me a cynic if you like),
> the increase in requirements of SW is driven by profit (of course). When
> people see software that they (think) they need, and it tells them they
> need to double their memory or buy a graphics accelerator, then they go
> out and do it. This feeds the hardware industry by giving it challenges
> to design new gizmos for the software people to develop with. Basically
> it's a vicious circle. What it also means is that when SW designers (and
> hardware) come across a problem, they don't try to sort it out, they
> just increase the resources available to themselves, and everything gets
> bigger and more expensive!
>
>
>                         PAT.

Microsoft Word has tons of features I don't need, and it uses a lot of
memory, CPU cycles and disk space.  That kind of thing may encourage
bigger, faster computers, but so far the hardware has only gotten cheaper.
The worst thing about this is that even though I'm satisfied with the old
stuff, I (almost) have to upgrade if I'm going to share files with other
people.  Fortunately for me, most of the stuff I send and receive is plain
ASCII text (like this message), and I don't have Word installed at all on
this computer.

>
> In article <387AD481.58C40A0C@ieee.org>, Jamil Khaib <Khatib@ieee.org>
> writes
> >
> >Hi,
> >In the last few years the hardware resources for ASIC, FPGA, and CPLD
> >designers was improved in the manner of hw size, speed and fabrication
> >delay.
> >
> >SW programmers now also have high speed processors, large memories
> >advanced compilers and visual tools. although all these resources are
> >available for programmers, but -as I see- they do not improve their sw
> >_in_the_same_ratio_as_the improvements_of_the_resources. For example all
> >new sw versions need larger memories and faster processors without the
> >increase of the functionality of the new version.

All the software I'm aware of runs perfectly well on a computer that costs
$1000 dollars today.  Much of that software will not run reasonably on a
computer that cost $1000 five years ago.  In my opinion, software bloat is
not out-pacing HW advances.

> >This is because they always think that they will have larger memories
> >and faster processors, and they do not take time to optimize their code
> >or to calculate how many resources they need, as they did in the past.

You paint a picture of a bunch of programmers sitting around with nothing to
do because they've given up optimizing their code.  In reality, they're
spending that extra time inventing NEW stuff and, to a lesser extent, making
the old stuff more reliable.  It's a good thing.

> >
> >Since hardware technology has begun to offer HW designers more than
> >what they need, I think they will start doing the same as what SW
> >programmers are doing now.

Gates are cheaper and faster than ever - Hurray!  I'd rather spend more time
inventing and less time tweeking down the gate count.  If HW technology
allows me to invent cool stuff in less time by making gates cheap enough to
waste, I'm all for it.  If I can build a more maintainable design by wasting
a few gates, I'll do it.  And don't forget that the new FPGAs are getting
big enough to do some real work so I can't even complain about NREs and
manufacturing delays - rats!

> >
> >Do you think like me? Do you know how we can prevent this?
> >I think this can be prevented by following the Open Source and open
> >Hardware design concepts in the design. You can read more about this
> >idea in OpenIPCore Project at http://www.openip.org/oc
> >

Shared code is a great way to crank out designs faster.  It's also a great
way to waste resources by including features you don't need but don't want
to optimize out.  You can start with a proven core and then waste design
time by trying to make it smaller and faster (and probably screw up a good
thing), or you can stick with what works, waste a few gates, and get on
to the problems only you can solve.

> >Thanks
> >Jamil Khatib
> >OpenIP Organization     http://www.openip.org
> >OpenIPCore Project      http://www.openip.org/oc
> >OpenCores Project       http://www.opencores.org
> >
> >
>
> --
> Pat
>

Paul Butler



Article: 19757
Subject: Re: SDRAM controller ?
From: Ray Andraka <randraka@ids.net>
Date: Tue, 11 Jan 2000 14:11:27 GMT
You probably want to use something that is capable of registering the
bidirectional I/O at the IOB in both directions, to keep the clock-to-Q and
setup times independent of routing and short enough to keep the transfer
speeds up.  That rules out the Altera 10K.  Also, you will probably want
something with an on-chip PLL or DLL to reduce the clock skew between the
IOB registers and the SDRAM clock, which points you to one of the new
families like Xilinx Virtex or Altera Apex.

"Simon D. Wibowo" wrote:

> Hi,
>
> What, do you think, is the best FPGA for SDRAM controller ? Say, up to PC100
> ?
>
> TIA,
> simon

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka


Article: 19758
Subject: Re: Lucent Orca designs
From: Bob Wagner <rjwagner2@lucent.com>
Date: Tue, 11 Jan 2000 10:12:05 -0500


Rickman wrote:
> 
> I am in the middle of creating a couple of FPGA designs using the Lucent
> Orca OR3T and OR2T families and I thought I would post some of my
> results.
>
> In contrast, the Lucent/Viewlogic tools are at least a year if not two
> behind in capability and perhaps more in quality. I found out recently
> that the timing constraints must be processed out of the normal tool
> flow in order to use any type of general specification (other than point
> A to point B has XX ns delay!). These "logical" constraints are then
> processed into a file of literally thousands of entries which list each
> combination of start and end point one at a time. The final trace report
> has no way to correlate the findings back to the original spec entered
> by the user.
> 

Rick,

I would encourage you to review the ORCA Foundry 9.4 software release
(shipping Jan 15th), as the tools have been enhanced to clean up
this "logical" preferencing flow.

It is now possible to simply enter the "logical" timing preferences into
the .prf file.  The tools will understand these preferences without
having to expand them into another file as before, and you will be able
to review the results and correlate them to your initial entry.

This enhancement includes support for the use of wildcards in the
logical preferences as well.

Other enhancements will include a floorplanning tool, guided design,
STAMP timing model support for improved FPSC timing analysis, and many
run-time enhancements.

Check it out.

Bob Wagner
Lucent Technologies
New England Region FPGA/FPSC FAE
Article: 19759
Subject: Re: HW resources increased
From: peter@abbnm.com (Peter da Silva)
Date: 11 Jan 2000 15:14:20 GMT
In article <s7md4lraoj883@corp.supernews.com>,
Paul Butler <c_paul_butler@yahoo.com> wrote:
> Microsoft Word has tons of features I don't need and it uses a lot of
> memory, CPU cycles and disk space.

Indeed.

I just installed Microsoft Word 5.0 on a Macintosh SE/30. That's a 16 MHz
68030. Word 5.0 was *one file*, about 300k in size, and ran in a 384k memory
partition under MacOS 6.0.8, leaving me about 7M of the 8M on the box free.

So far as I could tell, there was no feature in Word 7.0 that I actually used
that was missing in Word 5.0, other than the ability to read Word 7.0 files.

The only thing I missed was the Windows keyboard navigation, and that's
something Microsoft had in the first version of Windows... and *that* ran
on a PC/XT under a 280k DoubleDOS memory partition while I was doing a
compile in the other 300+k.

(actually, that was Windows 2.0, but I'm sure 1.0 wasn't larger)

> The worst thing about this is that even though I'm satisfied with the old
> stuff, I (almost) have to upgrade if I'm going to share files with other
> people.

Indeed.

> Fortunately for me, most of the stuff I send and receive is plain ASCII text
> (like this message) and I don't have Word installed at all on this computer.

"Powered by 'vi'" (substitute Emacs, gvim, Brief, etc...)

> All the software I'm aware of runs perfectly well on a computer that
> costs $1000 today.

This is a use of the term "perfectly well" I'm not familiar with.

When I hit the right mouse button in Windows to bring up a contextual menu,
I often have to wait for Windows to *hit the disk* to figure out what options
to bring up. This can take as long as fifteen or twenty seconds. To bring up
a MENU. That's unacceptable.

On a ten year old Mac (the abovementioned SE/30) I can bring up any menu, and
even open folders that have been opened recently, without hitting the disk. My
fifteen year old Amiga behaved the same way... on floppies!

Windows goes out and rebuilds the entire desktop at, as near as I can tell,
utterly random times. Every time it does that it's an interruption.

OK, the Mac didn't have contextual menus ten years ago. The principle of
caching frequently used information still holds. Look, my UNIX window
manager (WindowMaker) caches menus it reads from disk. I've even got a
program that goes out and generates summaries of web sites and sticks them
into my menu, and that doesn't even faze it.

It's been said that 90% of programming is an exercise in caching. Why is
that so hard a lesson to learn? I've got a Windows box with more memory
than the first computer network I was on, back in '72, had in total disk
storage! Hell, I've got more RAM than the Mac in question has hard disk.
Why can't they use those resources to speed things up?

> Much of that software will not run reasonably on a
> computer that cost $1000 five years ago.

Sure it would. It just wouldn't run on a *wintel* PC that cost $1000 five
years ago. A 1985 Amiga, given a reasonable amount of RAM, or a 1990
Macintosh... these boxes would both have been well under $1000 five years
ago.

> You paint a picture of a bunch of programmers sitting around with nothing to
> do because they've given up optimizing their code.  In reality, they're
> spending that extra time inventing NEW stuff and, to a lesser extent, making
> the old stuff more reliable.  It's a good thing.

In what universe are they making the old stuff more reliable? In the mid-80s
people used to chortle at the Amiga's tendency to crash because multitasking
without memory management led to one program's crash bringing the whole thing
down. With proper memory management that would never happen.

The same users who used to jeer at the Amiga now have boxes that put the
mainframes of the mid-80s to shame, with MMUs and virtual machine support,
and when the machine locks up or crashes that's just treated as normal. Why?
My AT&T 3b1, a 68010-based machine even older than the Amiga... running UNIX
with a custom GUI in 2M RAM... has never crashed from errant programs. Windows
NT was developed in its entirety years later, and they still haven't managed
to solve that problem.

MacOS 9? Same problem. Maybe MacOS X will get them over the hump.

(All operating systems suck, but damn, where X-Windows sucks golf balls
 through soda straws, Microsoft Windows can suck asteroids through
 millipore filters...)

There's got to be some fundamental change in the OS design field to really
take advantage of the sorts of hardware resources we've got these days. I
don't know what that change will be... companies like Be talk about being
some kind of new paradigm, but they don't really seem to be able to explain
just what that new paradigm is.

I hope it's something that lets me plug components running different kinds
of OS together in the same machine, because I really like the idea of a PC
built like the heterogeneous LAN I have at work.

-- 
In hoc signo hack, Peter da Silva <peter@baileynm.com>
 `-_-'   Ar rug tú barróg ar do mhactíre inniu? 
  'U`
         "I *am* $PHB" -- Skud.
Article: 19760
Subject: Re: orca3t125 clock problems
From: Bob Wagner <rjwagner2@lucent.com>
Date: Tue, 11 Jan 2000 10:33:45 -0500
Jas,

If the 20 MHz clock is generated internally to the FPGA, then
the tool should be able to identify the change on the falling
edge, assuming you have a frequency or period preference on
both clocks.

If not, or if you want to make sure, you can use the
MULTICYCLE timing preference to specify the relationship from
a source clock to a destination clock for all FFs:

MULTICYCLE "m1" START CLKNET "clk_20" END CLKNET "clk_40"  12.5 ns;
MULTICYCLE "m2" START CLKNET "clk_40" END CLKNET "clk_20"  12.5 ns;

These two preferences would tell par that a path from an FF clocked with
clk_20 must make it to its destination FF (clocked with clk_40) in 12.5 ns
(including clock-to-out and setup), and vice versa.

MULTICYCLE also requires that a FREQUENCY or PERIOD preference is
specified for each clock.

You could also specify any value in ns if you wanted to relax those
paths, or alternatively specify <n> x, where <n> is the number of
destination clock cycles by which to relax the constraint.

Hope this helps.

Bob Wagner
Lucent Technologies
New England FPGA/FPSC FAE


"trlcoms(news)" wrote:
> 
> Does anyone know how to specify the relationship between 2 clocks within
> an ORCA 3T125 using the ORCA 9.3.5 tools?
> The clocks are 20 MHz and 40 MHz; the 20 MHz changes on the falling edge
> of the 40 MHz.
> 
> Thanks
> 
>             Jas
Article: 19761
Subject: Re: XC4000 Configuration Bitstream structure
From: "George" <g_roberts75@hotmail.com>
Date: Tue, 11 Jan 2000 16:00:59 -0000
Hi,

I heard about a Java-based program called JBits which edits bitstreams for
the XC4000. How did the guy who wrote this program manage to do that if he
did not know about the structure of the XC4000 bitstream? I think his name
is Dr. Guccione. Does he work for Xilinx? What do we need in order to get
this information from Xilinx? As I said, sometimes FPGA architectures are
totally floorplanned, and (at least from an abstract point of view) they do
not need so much extra knowledge to be described in the bitstream, so why
bother using the Xilinx tools for something obsolete, really?




Article: 19762
Subject: Re: HW resources increased
From: jonathan@oxfordbromley.u-net.com (Jonathan Bromley)
Date: Tue, 11 Jan 2000 16:16:54 GMT
On 11 Jan 2000 15:14:20 GMT, peter@abbnm.com (Peter da Silva) wrote:

>There's got to be some fundamental change in the OS design feild to really
>take advantage of the sorts of hardware resources we've got these days. I
>don't know what that change will be... companies like Be talk about being
>some kind of new paradigm but they don't really seem to be able to explain
>just what that new paradigm is.

Well, that just might be because no new paradigm is required.  It
would just be kinda neat if the turkeys that brought us WinXXXX
had read a few operating system and parallel programming textbooks
before they started, so that they could see how all the problems
that now plague us on Wintel were solved decades ago by people
who thought for their living.

Hurrumph.

Jonathan Bromley

Article: 19763
Subject: Re: Lucent Orca designs
From: elynum@my-deja.com
Date: Tue, 11 Jan 2000 16:31:45 GMT
Is there a free sample of the Lucent tools? Like for 30 days or
something?


Sent via Deja.com http://www.deja.com/
Before you buy.
Article: 19764
Subject: Re: XC4000 Configuration Bitstream structure
From: Ray Andraka <randraka@ids.net>
Date: Tue, 11 Jan 2000 17:06:26 GMT
Steve Guccione works for Xilinx.  He and Delon Levi (also a Xilinx
employee) are the guys behind JBits.  I don't know whether it is openly
available, however.

George wrote:

> Hi,
>
> I heard about a JAVA based program called JBITS which edits bitstream for
> XC4000. How did the guy, who wrote this program, manage to do that if he did
> not know about the structure of XC4000 bitsream? I think his name is Dr.
> Guccione. Does he work for Xilinx? What do we need in order to get this
> information from Xilinx?. As I said, sometimes, FPGA architectures are
> totally floorplanned and they do not need (at least from an abstract point
> of view) so much extra knowledge to be described in bitstream, so why bother
> use Xilinx tools for something obsolete really??.

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka


Article: 19765
Subject: Re: XC4000 Configuration Bitstream structure
From: nweaver@ribbit.CS.Berkeley.EDU (Nicholas C. Weaver)
Date: 11 Jan 2000 17:27:02 GMT
In article <387B6136.13A02A43@ids.net>, Ray Andraka  <randraka@ids.net> wrote:

> Steve Guccione works for Xilinx.  He and Delon Levi (also a Xilinx
> employee) are the guys with J-Bits.  I don't know if it is openly
> available or not however.

	JBits is not publicly available on the web, but I believe
you can ask for a copy at the email address at this location
(http://www.xilinx.com/xilinxonline/index.htm) and they will email you
back with where to get a copy and the decryption key to unpack it.
-- 
Nicholas C. Weaver                                 nweaver@cs.berkeley.edu
Article: 19766
Subject: Re: HW resources increased
From: ian@five-d.com (Ian Kemmish)
Date: 11 Jan 2000 18:14:10 GMT
In article <387AD481.58C40A0C@ieee.org>, Khatib@ieee.org says...

>SW programmers now also have high speed processors, large memories
>advanced compilers and visual tools. although all these resources are
>available for programmers, but -as I see- they do not improve their sw
>_in_the_same_ratio_as_the improvements_of_the_resources. For example all
>new sw versions need larger memories and faster processors without the
>increase of the functionality of the new version. This is because they

This is simply not true.  The latest version of Jaws was smaller than the
previous version, and incorporated PostScript LanguageLevel 3
functionality.  Some people (especially marketing departments and managers)
feel small code size is not important, but smaller programs do get fewer
cache misses and TLB misses, and depending on what you're running, this can
make a perceptible difference to performance.

Of course, economics being what it is, one only gets the opportunity for a
major code squeeze once or twice a decade.  But it is fun to revisit old
code once in a while :-)

>Do you think like me? Do you know how can we prevent this?
>I think this can be prevented by following the Open Source and open
>Hardware design concepts in the design. You can read more about this
>idea in OpenIPCore Project at http://www.openip.org/oc

Open Source is a large part of the problem, not part of the solution.

Compact, fast and robust code is produced by Real Programmers, who either work 
alone, or, as described in The Mythical Man Month, with a co-pilot.  Ideally, 
the Real Programmer develops on the slowest machine in the building and the 
co-pilot tracks down bugs on the fastest machine in the building:-)

Activities like Open Source, on the other hand, have a cast of thousands, and a 
strong economic incentive to ship many flaky new versions and fix them on a
time and materials basis later on.  If you want to see what results when you 
apply a cast of thousands to an otherwise simple problem, look at the ICL 2900 
(my own choice for computer of the century, if only because we should never 
forget the Great Disasters Of Computing).


- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Ian Kemmish                   18 Durham Close, Biggleswade, Beds SG18 8HZ, UK
ian@five-d.com                Tel: +44 1767 601 361
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Behind every successful organisation stands one person who knows the secret
of how to keep the managers away from anything truly important.

Article: 19767
Subject: Re: Lucent Orca designs
From: husby@fnal.gov (Don Husby)
Date: Tue, 11 Jan 2000 19:20:29 GMT
elynum@my-deja.com wrote:
> Is there a free sample of the Lucent tools? Like for 30 days or
> something?

Yes.  See:
http://www.lucent.com/micro/fpga/foundry935/foundrydown.html

This gives you free use of the back-end (place and route) software
for their "smaller" parts (up to 55K gates).  It accepts EDIF netlists
from most front-ends, including Viewlogic and Leonardo.



--
Don Husby <husby@fnal.gov>             http://www-ese.fnal.gov/people/husby
Fermi National Accelerator Lab                          Phone: 630-840-3668
Batavia, IL 60510                                         Fax: 630-840-5406
Article: 19768
Subject: Re: 100 MHz counters
From: "Kresten Nørgaard" <nospam_kresten_noergaard@ddf.dk>
Date: Tue, 11 Jan 2000 20:25:53 +0100

Andy Peters wrote in message <85d9lj$18d7$1@noao.edu>...
>Kresten Nørgaard wrote in message <857h4h$96j$1@news.inet.tele.dk>...
>>Hi group!
>>I'm looking into a new design, consisting of four 32-bit 100 MHz
>>asynchronous counters. When stopped, the counters are emptied into a FIFO
>>(common to all counters - 32 Kbytes total). The FIFOs will be read
>>through an ordinary 8 MHz CPU interface.
>
>Question: why an async counter?  Especially at 100 MHz?  You'd be better off
>with a synchronous counter and some logic that generates count enables.


Quite right, but I figured that I would need 4 "global" clocks to make 4
counters, and not all FPGA families feature that many distributed clocks.

Another issue is power consumption. I reckoned I could lower the power
dissipation if I chose a ripple counter, but I might be wrong on that?

Kresten


Article: 19769
Subject: Configuring virtex devices
From: Tom Leacock <tom@pavcal.com>
Date: Tue, 11 Jan 2000 14:41:00 -0500
Links: << >>  << T >>  << A >>

What is the best technique for ISP configuration of Virtex devices from
a PC, while allowing an XC1700 or XC1800 PROM to be inserted later for
production?
Is a JTAG port for each component the simplest way?
I appreciate any suggestions.
-- Tom
----------------------------------------------
Thomas Leacock
Panasonic AVC American Laboratories (PAVCAL)
95 D Connecticut Dr.
Burlington NJ 08016-4180

Phone: 609-386-8600 ext.115
Fax:   609-386-4999

email: toml@pavcal.com
----------------------------------------------


Article: 19770
Subject: Re: 100 MHz counters
From: "Andy Peters" <apeters.Nospam@nospam.noao.edu.nospam>
Date: Tue, 11 Jan 2000 13:10:05 -0700
Links: << >>  << T >>  << A >>
Kresten Nørgaard wrote in message <85g02u$1cr$1@news.inet.tele.dk>...
>
>Andy Peters wrote in message <85d9lj$18d7$1@noao.edu>...
>>Kresten Nørgaard wrote in message <857h4h$96j$1@news.inet.tele.dk>...
>>>Hi group!
>>>I'm looking into a new design, consisting of 4 pcs. of  32-bit 100 MHz
>>>asynchronous counters. When stopped, the counters are emptied into a FIFO
>>>(common to all counters - 32 kbyte size total). The FIFO's will be read
>>>through an ordinary 8 MHz CPU interface.
>>
>>Question: why an async counter?  Especially at 100 MHz?  You'd be better off
>>with a synchronous counter and some logic that generates count enables.
>
>
>Quite right, but I figured that I would need 4 "global" clocks to make 4
>counters, and not all FPGA families feature that many distributed clocks.

Do you have four independent 100 MHz inputs?  If not, why not use one 100
MHz global clock to count events on the four "clock" inputs?
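That single-fast-clock scheme can be sketched behaviorally (a minimal Python model, not HDL; the function name and the two-flop-synchronizer assumption are mine): sample each asynchronous "clock" input on the one 100 MHz clock, detect a rising edge, and use that edge as a count enable for an ordinary synchronous counter.

```python
def count_events(samples, width=32):
    """Count rising edges of an input sampled by one fast clock.

    samples: sequence of 0/1 values of the async "clock" input,
    one sample per fast-clock cycle (assumed already passed
    through a 2-flop synchronizer in real hardware).
    Wraps at 2**width, like a 32-bit hardware counter.
    """
    count = 0
    prev = 0
    for s in samples:
        if s == 1 and prev == 0:      # rising edge -> count enable
            count = (count + 1) % (1 << width)
        prev = s
    return count

# Four counters would share the one 100 MHz clock; each just gets
# its own enable.  E.g. a 25 MHz input sampled at 100 MHz:
print(count_events([0, 0, 1, 1] * 10))   # -> 10
```

Note the scheme only works while the event inputs stay well below the sampling clock (edges at least two samples apart); inputs near 100 MHz would need a genuinely clocked counter per input.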

>Another issue is power consumption. I reckoned I could lower the power
>dissipation if I chose a ripple counter, but I might be wrong on that?

Well, a ripple counter doesn't have everything toggling at once, so the power
dissipation should decrease.  Getting a ripple counter to work at 100 MHz,
however, is another issue entirely.
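The arithmetic behind that power argument can be sketched (a toy Python activity count of my own, not a real power model; it ignores routing and clock-tree loading): in a synchronous counter every flop sees an active clock edge on every count, while in a ripple counter flop i is clocked by bit i-1 and so sees an active edge only every 2**i counts, giving roughly 2N clock events total versus 32N.

```python
def clock_edges(n_increments, width=32, ripple=True):
    """Count active clock edges seen by the flip-flops of a
    width-bit binary counter over n_increments counts.

    Synchronous: all `width` flops are clocked on every count.
    Ripple: flop i sees an active edge every 2**i counts, so the
    total is N + N/2 + N/4 + ... (just under 2N).
    """
    if not ripple:
        return n_increments * width
    return sum(n_increments // (2 ** i) for i in range(width))

sync = clock_edges(1000, ripple=False)   # 32000 clock events
rip = clock_edges(1000, ripple=True)     # 1994, just under 2 * 1000
```

The data-output toggles are the same binary sequence either way; the saving is almost entirely in clock-pin activity, which is why the ripple counter's speed penalty is the price of it.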


-- a
-----------------------------------------
Andy Peters
Sr Electrical Engineer
National Optical Astronomy Observatories
950 N Cherry Ave
Tucson, AZ 85719
apeters (at) noao \dot\ edu

Spelling Counts!  You don't loose your money - you lose it.



Article: 19771
Subject: Re: HW resources increased
From: "Paul Butler" <c_paul_butler@yahoo.com>
Date: Tue, 11 Jan 2000 15:04:42 -0600
Links: << >>  << T >>  << A >>

Peter da Silva <peter@abbnm.com> asks in message
news:85fhcc$d7r@web.nmti.com...
>
> In what universe are they making the old stuff more reliable?
>

There is no shortage of bugs in any of the SW I use, but each new version
seems to have a different set of bugs.  I take that to mean that the old
bugs are fixed.  I guess you don't expect new versions to fix old bugs?

I admit I'm used to living with bugs.  EDA software is so buggy it's
unbelievable - it's like a soap bubble always on the verge of collapse.  My
OS is comparatively stable and I hardly blink when I have to reboot.  What
else am I going to do?  Grin and bear it.

From your response and others, I gather that you think SW only gets worse
with time?  Is there no trade-off between reliability and efficiency or
development time?  Doesn't the idea of using tested cores suggest bigger
but more stable HW designs?

Paul Butler



Article: 19772
Subject: Re: HW resources increased
From: peter@abbnm.com (Peter da Silva)
Date: 11 Jan 2000 21:24:57 GMT
Links: << >>  << T >>  << A >>
In article <387b56ee.1931612@news.u-net.com>,
Jonathan Bromley <jonathan@oxfordbromley.u-net.com> wrote:
> Well, that just might be because no new paradigm is required.  It
> would just be kinda neat if the turkeys that brought us WinXXXX
> had read a few operating system and parallel programming textbooks
> before they started, so that they could see how all the problems
> that now plague us on Wintel were solved decades ago by people
> who thought for their living.

It's something in the water in Washington State. You get people thinking
Starbucks is good coffee and OS architects spontaneously forget everything
they learned from RSX-11 and VMS.

-- 
In hoc signo hack, Peter da Silva <peter@baileynm.com>
 `-_-'   Ar rug tú barróg ar do mhactíre inniu? 
  'U`
         "I *am* $PHB" -- Skud.
Article: 19773
Subject: Re: PCI/USB project started
From: mcjy@my-deja.com
Date: Tue, 11 Jan 2000 22:02:06 GMT
Links: << >>  << T >>  << A >>
 Hi!
 For the PCI core project, I would
 expect it to take six months to get
 a basic model (depending on how many
 people; my estimate is 5 people).
 It sounds like a very long time, but
 organising a project on the Internet
 is very different from a normal project.
 All of us have full-time jobs, so
 we can only spend around 6 to 10 hours
 each week.

 I guess we can use one month to develop
 the block diagrams and configuration
 of the project (as you know, we will need
 some time for people to give feedback), and
 several generations of the block diagrams
 would be developed in this month.  Then
 we will focus on the detailed spec of each
 block.  We might need to think about some
 of the interaction between blocks in the
 early planning stage, but it really depends
 on the situation.

 Then we will need at least two months to
 develop the blocks, then combine them
 and carry out testing in simulation.
 At this moment I still wonder how we
 can test our design.  I believe that the
 prototyping board from Altera should be
 very useful, but if we want to do the
 test in a real circuit we will also need
 to develop a device driver.

 About the PCI specification:
 I think we will work on PCI 2.2.
 The 64-bit extension is only one of the
 features supported in PCI 2.x; it is OK to develop
 a 32-bit core for PCI 2.2 (and I think it would
 be much easier to start with).

 The PCI specification says a device should
 be able to run from 66 MHz down to 0 MHz.
 But the hardware performance requirement of
 a 66 MHz system may be too high (difficult to
 prototype), so maybe we need to work on a 33 MHz
 design first.

 PCI-X is 133 MHz PCI.  I don't know if its
 specification has been released yet.  It is developed
 by Compaq and a few other PC companies.
 With current FPGA performance, I guess
 it is almost impossible for us to prototype.

 CompactPCI is based on PCI 2.x and it supports
 hot swap.  It has extra requirements on the
 pin implementation (the pins must be tristated
 during the hot-swap process).  It is most likely
 to be used in industrial measurement/control
 systems. (to replace something called VME bus?
  not sure about the correct name...)

 I haven't looked at USB yet.

 Have a nice day.

 from
 Joe




Sent via Deja.com http://www.deja.com/
Before you buy.
Article: 19774
Subject: Re: Altera Flex10K bitstream compatibility ?
From: ying@soda.CSUA.Berkeley.EDU (Ying C.)
Date: 11 Jan 2000 22:37:50 GMT
Links: << >>  << T >>  << A >>


Nothing will happen.  A 10K20 simply can't be configured with a 10K10 bitstream.
Since they are pin-to-pin compatible, the device will try to configure and then get
stuck in the error-out mode (i.e. nStatus goes high).

Ying

In article <387B0CFA.FBFC56FE@dotcom.fr>,
Nicolas Matringe  <nicolas@dotcom.fr> wrote:
>Hi
>One of our engineers who doesn't know much about FPGAs just told me "All
>right(, I replaced the 10K10 with a 10K20, this should work fine". I
>stopped him, telling that it's not because they are pin to pin
>compatible that they can be programmed with the same bitstream.
>Any of you knows what would happen if we tried? Just curious...
>
>Nicolas MATRINGE           DotCom S.A.
>Conception electronique    16 rue du Moulin des Bruyeres
>Tel 00 33 1 46 67 51 11    92400 COURBEVOIE
>Fax 00 33 1 46 67 51 01    FRANCE



