Messages from 81950

Article: 81950
Subject: Re: ISE
From: Jim George <send_no_spam_to_jimgeorge@gmail.com>
Date: Tue, 05 Apr 2005 02:21:02 -0600
mmkumar@gmail.com wrote:
> hi,
>   In ISE, the console window says "synthesis completed", but in the
> process tree for the source it still shows a question mark instead of an
> exclamation mark (the mark for synthesis complete). And when you click
> Synthesis Report, it starts the synthesis process all over again. If
> anyone knows, please let me know.
> 
> ~Mack.
> 

I've seen this happen occasionally (ISE 6.3), but performing a dependent 
action (like Translate after Synthesize) does not cause re-synthesis, so 
I don't usually care.

Article: 81951
Subject: Re: IBUFG and BUFG +xilinx
From: Jim George <send_no_spam_to_jimgeorge@gmail.com>
Date: Tue, 05 Apr 2005 02:25:45 -0600
williams wrote:
> Hello Guys,
> I have a doubt about the IBUFG and BUFG in Xilinx.
> 1. I have connected the clock from an oscillator to a GCLK I/O of the
> Xilinx device. In this case, is it required to instantiate the IBUFG
> inside my code as well?
> 2. The DCM output is already BUFGed, I think, so is it required to
> BUFG again in my code?
> 
> Thanks and regards
> Williams

If you manually insert BUFGs and IBUFGs, the tools will not try to 
insert another one, so put them in to be sure. Otherwise you may find 
later on, when your design becomes more dense, that your clock is 
suddenly put onto longlines or even local routing.
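
For what it's worth, a minimal sketch of that manual instantiation (this assumes the Xilinx unisim primitives IBUFG and BUFG; the module and signal names are illustrative, not from the original post):

```verilog
// Sketch: manually buffer an external oscillator clock.
module clk_in (
    input  wire clk_pad,   // pin driven by the oscillator
    output wire clk_int    // buffered clock for the rest of the design
);
    wire clk_ibufg;
    // Pad -> dedicated clock-input buffer...
    IBUFG u_ibufg (.I(clk_pad),   .O(clk_ibufg));
    // ...then onto a global clock net, so the tools don't have to guess.
    BUFG  u_bufg  (.I(clk_ibufg), .O(clk_int));
endmodule
```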

-Jim

PS: I think your post belongs only on comp.arch.fpga; the others are for 
language-specific questions.

Article: 81952
Subject: Re: Open PowerPC Core?
From: "Alex Freed" <alexf@mirrow.com>
Date: Tue, 5 Apr 2005 01:26:58 -0700

"Ziggy" <Ziggy@TheCentre.com> wrote in message
news:XYS3e.131292$r55.32410@attbi_s52...
> Eric Smith wrote:
>. A reproduction of a 486 or base Pentium would
> be plenty for what i want to do.

Not being a top authority on soft cores, I'll still observe that:

1. Implementing a CISC CPU is much more resource-consuming than implementing
a RISC core.
2. x86 is way crazy because of the need to maintain compatibility with the
8086's real mode.

In the late '80s Intel made a special version of the 386 (the 385, if I
remember right) that was basically a 386 without the real mode.
It was much cheaper than a 386, but there were no takers: x86 is used so much
only because of the huge volume of written code,
not because it is a good architecture.
If I had to go the CISC way, I'd much rather clone a 68000. Just as much
software written, and a considerably better
instruction set.


-- 
-Alex.



Article: 81953
Subject: Re: Stupid question
From: Jim George <send_no_spam_to_jimgeorge@gmail.com>
Date: Tue, 05 Apr 2005 02:27:36 -0600
Thomas Womack wrote:
> Is there any way of using the Xilinx toolchain on a Mac?
> 
> I have become spoiled by my Mac Mini, and unpacking my loud PC
> just to run place-and-route seems inelegant.
> 
> Tom
> 

Try keeping your noisy PC elsewhere on a network and use Virtual Desktop 
under Virtual PC (eek!!!)

-Jim

Article: 81954
Subject: Re: Searching for Vision Concavity Algorithm
From: Jonathan Bromley <jonathan.bromley@doulos.com>
Date: Tue, 05 Apr 2005 09:38:51 +0100
On Mon, 4 Apr 2005 12:44:58 -0700, "Brad Smallridge"
<bradsmallridge@dslextreme.com> wrote:

>I have lots of little objects being fed to me one pixel at a time.
>It's a line scan sorting operation with multiple ejectors.

OK.  Since you get pixel-by-pixel, there are (I think) some 
iterative tricks that are worth trying.  The idea I sketch
out below is really line-by-line rather than pixel-by-pixel,
and it assumes you've run-length encoded each line (i.e.
you know the X coordinates of all the edges on the line).

(When I say "X" I mean the direction of camera scan;
"Y" is the direction of travel of the belt. YMMV!)

>I have an algorithm now that does blob labeling.  I am thinking
>that as the blob grows, the rate that pixels are added to the
>left and right side should first increase, then decrease. If I
>see increase, decrease, and increase, that might indicate a
>convexity, and should be fairly simple to detect.  At least this
>is what I am thinking today.

I don't think it's quite that easy.  You can get a concavity
on an edge whose gradient is always positive...

***                     '*' = object, '.' = concavity
****..
*****....
******.....
*************
**************

>I do need the concavity information because it has proven
>so far to be the best way to determine where to segment
>the objects that are touching.  The standard erosion/dilation
>techniques only separate some of the objects.

OK.  Here's what I believe you do...  Let's talk about the 
right-hand side of the object - left-hand side is processed
exactly the same way, separately.

Keep track of the end of EACH AND EVERY line.  This "end"
is effectively a vertex of the object.  What's more, add
a single-bit flag to each of these line ends saying
"this vertex is on the convex hull".  By the time you're 
done, any vertex WITHOUT the marker is part of a concavity.
Start with the right edge of the topmost line; this is 
obviously on the convex hull, so it gets a marker.

As you get each new vertex, mark it as being on the 
convex hull (it obviously is so at this stage, because
it's at the bottom).  Calculate its gradient (in fact 
I think you want the inverse gradient, dX/dY) back 
to the nearest vertex that HAS the convex-hull marker.
First time around, this is sure to be the previous line.
Then calculate (or look at a stored copy of) the gradient
from THAT point back to the PREVIOUS point on the convex
hull.  If the new gradient is larger, then the edge
has "turned outwards" and the previous point is no longer 
on the convex hull.  Delete its marker and re-try the 
gradient comparison, to the *next* point back.  So
you go on until either you've reached a convexity or
you hit the top.

When the object is closed-off, you now have a list of
all its vertices and each vertex is labelled to say
whether it's on or off the convex hull.  Vertices off
the hull mark one boundary of a concavity; the outer
(hull-side) limit of the concavity can be determined
by interpolating the hull between pairs of on-hull
vertices.
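
As a cross-check of the procedure above, here is a hedged software sketch of the right-hand-edge version (plain Python; the function and variable names are my own, not from the original post). It replaces the explicit dX/dY gradient comparison with a cross-product sign test, which is the standard division-free way to ask "has the edge turned outwards?":

```python
# Sketch of the line-by-line convex-hull marking for the RIGHT-hand edge
# of a blob. Input: one right-edge x coordinate per scan line, top to
# bottom. Output: one flag per line, True if that line's end vertex is on
# the convex hull; unflagged vertices border a concavity.

def mark_right_hull(edge_x):
    def cross(o, a, b):
        # Sign of the turn o -> a -> b in (y, x) coordinates.
        # > 0 means the chain turns outwards at a, so a loses its
        # convex-hull marker (the "delete its marker and re-try" step).
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    hull = []                        # stack of (y, x) vertices still marked on-hull
    for y, x in enumerate(edge_x):
        p = (y, x)
        # Re-try the comparison against earlier marked vertices until we
        # reach a convexity or hit the top, exactly as described above.
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) > 0:
            hull.pop()
        hull.append(p)               # the newest vertex is always on the hull so far

    on_hull = set(hull)
    return [(y, x) in on_hull for y, x in enumerate(edge_x)]

# Jonathan's ASCII example: line widths 3,4,5,6,13,14 -> right edges below.
# Lines 1..3 sit in the concavity, so only the top line and the last two
# lines keep their hull markers.
print(mark_right_hull([2, 3, 4, 5, 12, 13]))
```

The left-hand edge is processed the same way with the sign convention flipped (or by negating the x coordinates first).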

>I can remove holes with a filter if that becomes a problem.

It isn't; I just wondered whether you *needed* the holes.
When I did the fish-finder ten years ago, I was given some
software that used the Lumia/Shapiro algorithm to label 
blobs.  It was dog-slow.  I re-wrote it and came up with
a fast labelling algorithm that builds a tree structure for
each object - top of the tree is the object itself, 
next level of hierarchy is one record per hole, then 
each hole may have another level for objects wholly in
the hole (!), etc.  Given that sort of scheme, it's
trivial to ignore holes - you just throw away anything
in the hierarchy below the top-level object.

However, the algorithm I developed wouldn't make much sense 
in FPGA because it maintains dynamically allocated and linked
data structures.  I'd be very interested to know how you 
did it in hardware.

>So are you doing vision now?

No longer.  But I still think it's about the most fun
you can have in electronics without exciting too much 
attention from the authorities :-)
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL, Verilog, SystemC, Perl, Tcl/Tk, Verification, Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, BH24 1AW, UK
Tel: +44 (0)1425 471223          mail:jonathan.bromley@doulos.com
Fax: +44 (0)1425 471573                Web: http://www.doulos.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.

Article: 81955
Subject: Re: Open PowerPC Core?
From: David <david.nospam@westcontrol.removethis.com>
Date: Tue, 05 Apr 2005 11:40:16 +0200
On Tue, 05 Apr 2005 09:46:07 +0200, Antti Lukats wrote:

> 
> "David" <david.nospam@westcontrol.removethis.com> schrieb im Newsbeitrag
> news:pan.2005.04.05.07.04.46.345000@westcontrol.removethis.com...
>> On Mon, 04 Apr 2005 11:48:09 -0700, Eric Smith wrote:
>>
>> > Tobias Weingartner wrote:
>> >> I doubt it's a matter of patents, but more a matter of licening.  The
> two
>> >> are very different beasts.
>> >
>> > But if there isn't a patent on an architecture, you don't need a license
>> > to implement it.  The purpose of the license is to grant you a right
> that
>> > was taken away from the patent.  If there's no patent, you haven't been
>> > denied the right.
>>
>> Since this topic has come up, maybe someone could answer this for me:
>>
>> I've seen publicly available (often open source) cores for other
>> processors, such as the AVR.  Are these sort of cores legal to make,
>> distribute and use?  Supposing I made (from scratch) an msp430 compatible
> 
> It's been done - a full SoC based on an MSP430-compatible core :)
> http://bleyer.org/
> 

I'd have been surprised if it hadn't been done, given that the msp430 core
is a solid 16-bit core with a good gcc port and a (relatively) clean
instruction set and programming model.  I was, however, more interested in
knowing where such a core stands legally (although I will also have a look
at the core sometime for curiosity - and the site you gave has a few other
interesting links).

> 
>> core for an FPGA - any ideas whether that would be legal or not?  I'm
>> guessing that using the name "msp430" would be a trademark and/or
>> copyright violation, but if there are no patents involved it should be
>> okay?  Does it make any difference whether it is just used by the
>> developer, released as an inaccessible part of a closed design, or whether
>> it is released for free use by others?
>>
>> mvh.,
>>
>> David
>>


Article: 81956
Subject: Re: Need Help
From: "Neo" <zingafriend@yahoo.com>
Date: 5 Apr 2005 03:02:46 -0700
For inferring block RAM, see the code below:

----code------------------------------
library ieee;
use ieee.std_logic_1164.all;
use ieee.std_logic_arith.all;
use ieee.std_logic_unsigned.all;

entity sram is
        port(
                clock:  in std_logic;
                enable: in std_logic;
                rwbar:  in std_logic; -- (1 - Read, 0 - Write)
                addr:   in std_logic_vector(4 downto 0);
                data:   inout std_logic_vector(15 downto 0)
        );
end sram;

architecture sram_arch of sram is
type ram_type is array (0 to 31) of std_logic_vector(15 downto 0);
signal tmp_ram: ram_type;
begin
process(clock)
begin
  if (clock'event and clock='1') then
    if (enable='1' and rwbar='1') then   -- read: drive the bus
      data <= tmp_ram(conv_integer(addr));
    else
      data <= (others => 'Z');           -- otherwise release the bus
    end if;
  end if;
end process;

process(clock)
begin
  if (clock'event and clock='1') then
    if (enable='1' and rwbar='0') then   -- write
      tmp_ram(conv_integer(addr)) <= data;
    end if;
  end if;
end process;
end sram_arch;


Article: 81957
Subject: Re: can c++ code be loaded to a hardware PGA coprocessor card
From: "Simon Peacock" <nowhere@to.be.found>
Date: Tue, 5 Apr 2005 22:12:00 +1200
One extra point: it might be possible to implement C, but less likely
C++. C++ supports abstract concepts which are difficult to put into
hardware.  The trick with VHDL or Verilog is often thinking "how would I
implement this in logic?", so abstraction is out the door immediately.  C is
relatively good at low level, so consider an FPGA as something akin to a
driver.

One seriously huge advantage an FPGA has over any processor is massive
parallelisation; that is, it can do many things simultaneously.  At the
extreme, there are brute-force encryption breakers that simply try every
combination of key in parallel!!  This is where you gain... If you are
considering the FPGA as a peripheral to a processor, then it will most likely
run as fast as the processor can give it data and take back results.  But
give an FPGA freedom and it will direct-convert a 1.5 GHz radio and not
break a sweat.

I would have to suggest that you look at Verilog or VHDL... for starters,
they support parallelism... next, they are typed languages and force
casting.  Then there aren't any pointers :-) or gotos.
There are simulators for VHDL and Verilog which will allow you to see what
you are doing... Otherwise what you write in C - which is a sequential
language - may not be what actually happens... VHDL processes, which
roughly translate to functions (not well) in C, will all execute at the same
time.  C only handles this by kicking off threads, and a C program with 200
threads might not be considered manageable by some... but that's what VHDL
does!  And C's support of semaphores would make a hardware designer cringe.

Simon


"JJ" <johnjakson@yahoo.com> wrote in message
news:1112666920.468559.68270@l41g2000cwc.googlegroups.com...
> I'd agree, the PCI will kill you 1st, and anything difficult for the FPGA
> but easy on the PC will kill you again, and finally C++ will not be as
> fast as HDL - by my estimate maybe 2-5x (my pure prejudice). If you must
> use C, take a look at Handel-C; at least it's based on Occam so it's
> provably able to synthesize into HW coz it ain't really C, just looks like it.
>
> If you absolutely must use IEEE to get particular results forget it,
> but I usually find these barriers are artificial, a good amount of
> transforms can flip things around entirely.
>
> To be fair an FPGA PCI could wipe out a PC only if the problem is a
> natural, say continuously processing a large stream of raw data either
> from converters or special interface and then reducing it in some way
> to a report level. Perhaps a HD could be specially interfaced to PCI
> card to bypass the OS, not sure if that can really help, getting high
> end there. Better still if the operators involved are simple but occur
> in the hundreds at least in parallel.
>
> The x86 has at least a 20x starting clock advantage - 20 ops per FPGA
> clock for simple inline code. An FPGA solution would really have to be
> several times faster to even make it worth considering. A couple of
> years ago when PCI was relatively faster and PC & FPGAs relatively
> slower, the bottleneck would have been less of a problem.
>
> BUT, I also think that x86 is way overrated, at least when I measure nos.
>
> One thing FPGAs do with relatively no penalty is randomized processing.
> The x86 can take a huge hit if the application goes from entirely
> inside cache to almost never inside by maybe a factor of 5 but depends
> on how close data is  temporally and spatially..
>
> Now standing things upside down. Take some arbitrary HW function based
> on some simple math that is unnatural to PC, say summing a vector of
> 13b saturated nos. This uses less HW than the 16b version by about a
> quarter, but that sort of thing starts to torture x86 since now each
> trivial operator now needs to do a couple of things maybe even perform
> a test and bra per point which will hurt bra predictor. Imagine the
> test is strictly a random choice, real murder on the predictor and
> pipeline.
>
> Taken to its logical extreme, even quite simple projects such as say a
> cpu emulator can run 100s of times slower as C code than as the
> actual HW, even at the FPGA's leisurely rate of 1/20th the PC clock.
>
> It all depends. One thing to consider though is the system bandwidth in
> your problem for moving data into & out of rams or buffers. Even a
> modest FPGA can handle a 200 plus reads / writes per clock, where I
> suspect most x86 can really only express 1 ld or st to cached location
> about every 5 ops. Then the FPGA starts to shine with 200 v 20/4 ratio,
>
> Also when you start in C++, you have already favored the PC since you
> likely expressed ints as 32b nos and used FP. If you're using FP when
> integer can work, you really stacked the deck but that can often be
> undone. When you code in HDL for the data size you actually need you
> are favoring the FPGA by the same margin in reverse. Mind you I have
> never seen FP math get synthesized, you would have to instantiate a
> core for that.
>
> One final option to consider, use an FPGA cpu and take a 20x
> performance cut and run the code on that, the hit might not even be 20x
> because the SRAM or even DRAM is at your speed rather than 100s slower
> than PC. Then look for opportunities to add a special purpose
> instruction and see what the impact of 1 kernel op might be. An example
> crypto op might easily replace 100 opcodes with just 1 op. Now also
> consider you can gang up a few cpus too.
>
> It just depends on what you are doing and whether its mostly IO or
> mostly internal crunching.
>
> johnjakson at usa dot com
>



Article: 81958
Subject: Structural vs Behavioral
From: "Hendra" <u1000393@email.sjsu.edu>
Date: 5 Apr 2005 03:32:57 -0700
Both of the following 2 always blocks should synthesize to a decoder.

always @(address)
    case (address)
    2'b00 : row = 4'b0001;
    2'b01 : row = 4'b0010;
    2'b10 : row = 4'b0100;
    2'b11 : row = 4'b1000;
    endcase

always @(address)
  row = 2**address;

1. The first always block has 4 lines, and it will take many more lines
for a wider address, but anybody who reads it notices immediately that
the code synthesizes to a decoder. The second always block only takes
one line, irrespective of the address width, but it's harder to tell
what that line synthesizes to. How do I know when to write my code
the "brute force" way (the first always block), or the "high level" way
(the second always block), which takes full advantage of HDLs with a
lot fewer lines? In all my previous projects I have done it the brute
force way, but sometimes I wonder whether I am taking the right
approach. The brute force way seems like schematic entry, which could
be done just as well by pulling components from a schematic library,
which kind of defeats the purpose of HDLs. On the other hand, if I use
the high level way, my code takes full advantage of HDLs - much
shorter, but more difficult to debug. How do I know which one to choose?

2. When I do my projects, I always like to be able to see what my code
synthesizes to (the schematic equivalent of the HDL). That gives me
confidence in the reliability of my projects. But should I forget about
the schematic equivalent and treat it just like C - as long as it
works, who cares about the schematic equivalent?

I need advice from professional engineers who have been in this field
for several years. I would appreciate it if you can help me make the
decision. Thank You!

Hendra


Article: 81959
Subject: WebPack_7.1 on Linux ?
From: habib bouaziz-viallet <habib@mynewserverposfix.com>
Date: Tue, 05 Apr 2005 13:09:15 +0200
Hi all,

Xilinx claims that their WebPack ISE runs under RHEL. Has anybody
successfully installed it on a common Linux distro?

Many thanks, Habib
betula.fr

Article: 81960
Subject: DCM LOCKED as reset
From: manishr@softjin.com
Date: 5 Apr 2005 04:27:36 -0700
Hi,
Can the DCM's LOCKED output signal be used as a reset within the FPGA?
Is this scheme feasible:

The power-on reset acts as the DCM reset.
The DCM's LOCKED signal, shifted by 1 SRL16, acts as the reset for all the
functionality (say FSMs) within the FPGA.
*The FSMs have active-low resets.

The possible need for this scheme:
When the power-on reset connected to the DCM reset gets de-asserted, the
DCM starts "locking" the clock. At the same time, if the same power-on
reset is used as reset, all the FFs in the functionality are reset.

But while the DCM is not LOCKED, there are some clock pulses at the
output of the DCM, which might be of variable period.
So, just to avoid any false triggering of the FSMs by these clock
pulses, the LOCKED signal will keep the FSMs in the reset state.
When the LOCKED signal goes high, it is still delayed by 16 clocks (1
SRL16). So when this reset gets removed, all the functionality will be
receiving a stable DCM clock.
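
A sketch of that delay stage, assuming the Xilinx SRL16 shift-register primitive (the module and signal names are mine, not from the post):

```verilog
// Sketch: delay the DCM's LOCKED signal by 16 clocks with one SRL16 and
// use the result as an active-low reset for the FSMs.
module locked_reset (
    input  wire clk_dcm,   // DCM output clock
    input  wire locked,    // DCM LOCKED signal
    output wire rst_n      // active-low reset, released 16 clocks after lock
);
    // INIT = 0 so the output starts low (reset asserted); A3..A0 = 1111
    // selects the 16th tap, so 16 locked=1 samples must shift through
    // before the reset is released.
    SRL16 #(.INIT(16'h0000)) u_delay (
        .Q  (rst_n),
        .A0 (1'b1), .A1 (1'b1), .A2 (1'b1), .A3 (1'b1),
        .CLK(clk_dcm),
        .D  (locked)
    );
endmodule
```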

Cheers
Manish


Article: 81961
Subject: Re: Structural vs Behavioral
From: Jonathan Bromley <jonathan.bromley@doulos.com>
Date: Tue, 05 Apr 2005 12:59:44 +0100
On 5 Apr 2005 03:32:57 -0700, "Hendra" <u1000393@email.sjsu.edu>
wrote:

>Both of the following 2 always blocks should synthesize to a decoder.
>
>always @(address)
>    case (address)
>    2'b00 : row = 4'b0001;
>    2'b01 : row = 4'b0010;
>    2'b10 : row = 4'b0100;
>    2'b11 : row = 4'b1000;
>    endcase
>
>always @(address)
>  row = 2**address;

[So, which is nicer...]

Personally I like neither of these descriptions of a 
decoder;  how about the following:

always @(address) begin
  row = 0;
  row[address] = 1'b1;
end

This is likely to be efficient both for synthesis and for
simulation, and (at least to me) it is crystal-clear
that it represents a decoder, unlike "1<<address" or
(worse) "2**address" which looks like a bizarre piece
of arithmetic.  Yet it is completely scalable to any 
size of vector.

Now, I guess that you also have in mind some situations 
where it is much harder to find a representation in your
code that meets all three of the criteria you nicely pointed out:

- it should be easy to read - both its purpose, and the hardware
  structure it is intended to create, should be obvious
- it should scale easily to different sized vectors, etc
  (no cut-and-paste!)
- it should be effective, i.e. it should simulate efficiently
  and synthesise reliably to the logic you want

As you have clearly seen, when you write an HDL design you
are communicating on many levels: expressing your overall
design intent, describing a specific implementation, and
communicating your understanding to other human readers
(who may be operating on only a subset of these levels).
No-one ever said it was easy :-)

You have, it seems to me, already taken the most important
and most difficult step: you are asking yourself the right
questions.  All else is a combination of common sense, 
experience, and choosing something you are comfortable with.
How about asking the following questions each time you 
write something...

1) Can I add comments to the code that will document it 
   clearly enough so that I can understand it if I return
   to it a year later?
2) Could I explain it to my colleagues, without getting
   confused about my own intent?
3) What might I wish to change in the future?  Have I 
   done anything that would make that change unduly difficult?
4) Have I included a description of some unnecessary detail,
   that is not important to my purpose, and that the tools
   should sort out for themselves?
5) Could I have solved this problem using a pattern or style 
   of coding that is already used elsewhere in my organisation?
6) Have I done something that is clever or ingenious, just for
   the sake of doing it?  If so, how could I simplify it?

My $0.02-worth: if you continue to challenge your own choices
in the way you already are doing, then you won't go far wrong.
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL, Verilog, SystemC, Perl, Tcl/Tk, Verification, Project Services

Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, BH24 1AW, UK
Tel: +44 (0)1425 471223          mail:jonathan.bromley@doulos.com
Fax: +44 (0)1425 471573                Web: http://www.doulos.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.

Article: 81962
Subject: Re: Structural vs Behavioral
From: Michel Billaud <billaud@labri.u-bordeaux.fr>
Date: 05 Apr 2005 14:05:49 +0200
"Hendra" <u1000393@email.sjsu.edu> writes:

> Both of the following 2 always blocks should synthesize to a decoder.
> 
> always @(address)
>     case (address)
>     2'b00 : row = 4'b0001;
>     2'b01 : row = 4'b0010;
>     2'b10 : row = 4'b0100;
>     2'b11 : row = 4'b1000;
>     endcase
> 
> always @(address)
>   row = 2**address;
> 
> defeating the purpose of HDLs. On the other hand, if I use the high
> level way, my code will take full advantage of HDLs, much shorter but
> more difficult to debug. How do I know which one to choose?

Well, the third way just requires a few extra bytes:

 always @(address)  // that's a decoder
   row = 2**address;

:-)


MB

-- 
Michel BILLAUD                  billaud@labri.fr
LABRI-Université Bordeaux I     tel 05 4000 6922 / 05 5684 5792
351, cours de la Libération     http://www.labri.fr/~billaud
33405 Talence  (FRANCE)     

Article: 81963
Subject: Re: WebPack_7.1 on Linux ?
From: Marius Vollmer <marius.vollmer@uni-dortmund.de>
Date: Tue, 05 Apr 2005 14:12:59 +0200
habib bouaziz-viallet <habib@mynewserverposfix.com> writes:

> Xilinx claims that their WebPack ISE runs under RHEL. Has anybody
> successfully installed it on a common Linux distro?

Yes, I have it running on both a recent SuSE and on Debian GNU/Linux
unstable.  I mostly use the command line, but the graphical stuff
seems to work as well.

There are some tricks to install it correctly; search this newsgroup
for them.

IIRC, you have to set DISPLAY=":0" (and not ":0.0") because of some
braindeadness somewhere in WindU.  Also, you might need to install
libXm.so.3, maybe from Lesstif.

Also, if your box has an IPv6 address configured for localhost, coregen
might have problems talking to the ISE shell, since ISE listens on the
IPv4 address but coregen tries to connect to the IPv6 address, or
something.  I charge this braindeadness to the IPv6 design.

Article: 81964
Subject: Re: One or two DLLs for a SDRAM controller?
From: "Marc Randolph" <mrand@my-deja.com>
Date: 5 Apr 2005 05:26:14 -0700

Marius Vollmer wrote:
> "Marc Randolph" <mrand@my-deja.com> writes:
> [...]
> synchronized with the external system clock.  Since the P&R tools
> should know enough about the timing situation in the FPGA anyway, they
> might even be able to determine the delay needed in the DLL
> statically, and there would be no need for a feedback loop and no need
> to explicitly instantiate a DLL.  (Maybe future FPGAs and their tools
> will use DLLs automatically if they are needed to meet the timing
> constraints?)

Certainly seems possible for a default case.  Just need a way to easily
override it for the exceptions.

> So, in the picture above, the FPGA is now properly part of the system
> clock domain, but the SDRAM still needs to be considered.  As I see it
> now, this is not a problem of the delay of the signals between FPGA
> and SDRAM, but a problem of the delay between the oscillator and the
> SDRAM.  The SDRAM must be close enough to the FPGA so that they can be
> in a single clock domain, in any case.  Right?

There are several ways to make sure that the clock arrives at the same
time for both the FPGA and the other external devices, so in my mind,
the main problem is the delay between the FPGA and the SDRAM - which is
usually a function of not only prop delay due to distance, but setup
time and clock-to-out as well, which can eat into your clock period as
it did for this application:

http://groups-beta.google.com/group/comp.arch.fpga/msg/f9a3bf9e509bcd66?dmode=source

(click on "Multi-FPGA PCB data aggregation?" at the top to get the
other msgs)

> > I've not read up on the S3 DCM, but I assume it is the same as the
> > one in V2Pro, where you need to release the reset to the DCM *after*
> > the clock is present at the DCM input.  As long as you're doing that,
> > I don't see a problem with it.
>
> I don't think I am doing this right now, but I will look into it.  I
> guess the usual trick is to use a shift register (that is clocked
> directly by the oscillator) that shifts out a couple of ones and then
> sticks at zero, right?

Yep.  Just make sure the shifts add up to more than all the prop delays
(the prop delay across the chip on the way out to the SDRAM is probably
the longest).

[From your other posting]
> Ahh, yes.  I guess this is not specific to SDRAMs, right?  When you
> have multiple chips on your PCB that are driven by the same clock and
> are thus one big, multi-chip clock domain, you of course need to make
> sure that timing is met within that big clock domain.

Correct - DRAM is just the most common example.

> I don't see yet why the use of one or two DLLs (as sketched in my
> original post) _guarantees_ that timing is met since I would right now
> say that only the delay from SDRAM to FPGA has been taken into
> account.  The path from FPGA to SDRAM is just as critical, I think,
> and has not been considered.  In fact, I would say that one needs to
> use two separate clocks, one for sending to the SDRAM, and one for
> receiving.

Very correct.  But for the clock frequencies (actually, data-eye widths)
that most people deal with, you can get away with just using one.

> In essence, you need to give up the one big clock domain idea, and use
> some kind of self-clocking data pipe, one for every direction.

A number of RAMs do this, including QDR SRAM.  At even moderately high
frequencies (150 MHz), the data-eye is so small (effectively 300 MHz)
on those devices that it would be very difficult to do as part of a
system-synchronous design.

   Marc


Article: 81965
Subject: Quartus 5
From: Jedi <me@aol.com>
Date: Tue, 05 Apr 2005 13:33:43 GMT
Any information regarding the new Quartus 5.0 and NIOS II?


rick

Article: 81966
Subject: Re: Open PowerPC Core?
From: "Antti Lukats" <antti@openchip.org>
Date: Tue, 5 Apr 2005 15:36:18 +0200

"Ziggy" <Ziggy@TheCentre.com> schrieb im Newsbeitrag
news:Yww4e.13712$Vx1.3133@attbi_s01...
> Alex Freed wrote:
> > "Ziggy" <Ziggy@TheCentre.com> wrote in message
> > news:XYS3e.131292$r55.32410@attbi_s52...
> >
> >>Eric Smith wrote:
> >>. A reproduction of a 486 or base Pentium would
> >>be plenty for what i want to do.
> >
> >
> > Not being a top authority on soft cores I'll still observe that:
> >
> > 1. Implementing a CISC CPU is much more resource consuming than
> > implementing a RISC core.
> > 2. x86 is way crazy because of the need to maintain compatibility
> > with the 8086's real mode.
> >
> > In the late 80's Intel made a special version of the 386 (385 if I
> > remember right) that was basically a 386 without the real mode.
> > It was much cheaper than a 386 but there were no takers: x86 is used
> > so much only because of the huge volume of written code,
> > not because it is a good architecture.
> > If I had to go the CISC way, I'd much rather clone a 68000. Just as
> > much software written and a considerably better
> > instruction set.
> >
> >
> But in todays world, does anything actually use
> the 'real mode' on an x86 chip?
>
> Though i do agree that the 68k is a much better
> chip, the x86 has a larger 'generic' software
> base.
>
> I think the 68k has been done however.. I just
> dont remember where i saw that at.

There is something at opencores, but I'm not sure how usable it is.

antti




Article: 81967
Subject: Re: Open PowerPC Core?
From: Ziggy <Ziggy@TheCentre.com>
Date: Tue, 05 Apr 2005 13:38:00 GMT
Links: << >>  << T >>  << A >>
Alex Freed wrote:
> "Ziggy" <Ziggy@TheCentre.com> wrote in message
> news:XYS3e.131292$r55.32410@attbi_s52...
> 
>>Eric Smith wrote:
>>. A reproduction of a 486 or base Pentium would
>>be plenty for what i want to do.
> 
> 
> Not being a top authority on soft cores, I'll still observe that:
> 
> 1. Implementing a CISC CPU is much more resource consuming than implementing
> a RISC core.
> 2. x86 is way crazy because of the need to maintain compatibility with the
> 8086's real mode.
> 
> In late 80's Intel made a special version of 386 (385 if I remember right)
> that was basically a 386 without the real mode.
> It was much cheaper than a 386 but there were no takers: x86 is used so much
> only because of the huge volume of written code,
> not because it is a good architecture.
> If I had to go the CISC way, I'd much rather clone a 68000. Just as much
> software written and a considerably better
> instruction set.
> 
> 
But in today's world, does anything actually use
the 'real mode' on an x86 chip?

Though I do agree that the 68k is a much better
chip, the x86 has a larger 'generic' software
base.

I think the 68k has been done, however... I just
don't remember where I saw that.

Article: 81968
Subject: Re: Reverse engineering ASIC into FPGA
From: "Pete Fraser" <pfraser@covad.net>
Date: Tue, 5 Apr 2005 06:59:47 -0700
Links: << >>  << T >>  << A >>
"Neo" <zingafriend@yahoo.com> wrote in message 
news:1112686978.009999.201120@g14g2000cwa.googlegroups.com...
> Currently we are doing one such assignment for a client. They want to
> do a board respin and wanted us to replace the few ASICs in there with
> FPGAs. Fortunately they are not complex, but the process sucks:
> little or no documentation, or it's in some foreign language, crazy!! And
> nothing for reference except the working board. So it's like code,
> debug, debug, debug... until you get it right on the screen.
>
How do you go about quoting that, or is it by the hour?

If it's by the hour, how do you even give a vague estimate? 



Article: 81969
Subject: Re: RAMB16_S9
From: Ann <ann.lai@analog.com>
Date: Tue, 5 Apr 2005 07:09:47 -0700
Links: << >>  << T >>  << A >>
Hi, I am simulating using ModelSim. I saw that WE does go to "0" and the address coming in is changing, but the data out is still always 0 for some reason :-\ Thanks, Ann

Article: 81970
Subject: Re: Open PowerPC Core?
From: "Tim" <tim@rockylogic.com.nooospam.com>
Date: Tue, 5 Apr 2005 15:14:01 +0100
Links: << >>  << T >>  << A >>
"David" wrote ...

>>
>> its done full soc based on MSP430 compatible core :)
>> http://bleyer.org/
>>
>
> I'd have been surprised if it hadn't been done, given that the msp430 core
> is a solid 16-bit core with a good gcc port and a (relatively) clean
> instruction set and programming model.  I was, however, more interested in
> knowing where such a core stands legally (although I will also have a look
> at the core sometime for curiosity - and the site you gave has a few other
> interesting links).


And the MSP430 is a simplified version of the PDP-11.  Unlicensed? 



Article: 81971
Subject: Protection measurements
From: "Markus Blank" <ernte23@gmx.at>
Date: Tue, 5 Apr 2005 16:47:51 +0200
Links: << >>  << T >>  << A >>
Hi,

Let's assume we run a C program on the FPGA. This program contains a loop 
which executes EXACTLY n times. How can I make sure that this value n isn't 
changed in memory by an attacker? Is there something like tamper-resistant 
memory available? Or what other suggestions come into question?

Thanks for any comments

Markus 



Article: 81972
Subject: Re: RAMB16_S9
From: Paul Hartke <phartke@Stanford.EDU>
Date: Tue, 05 Apr 2005 07:57:58 -0700
Links: << >>  << T >>  << A >>
Make sure you are correctly using glbl.v:
http://www.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=6537

This is sometimes tricky to get right because of the various timescales
in each module.  However, look at the waveforms and make sure that you
wait long enough for glbl.GSR to go low, so that the BRAM output registers
actually pass the data out of the module.
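
In a testbench, that wait can look something like this.  A minimal sketch only: the wrapper module `ram_dut` and its port list are hypothetical, and glbl.v must be compiled and loaded as a second top-level unit as the answer record describes.

```verilog
`timescale 1 ns / 1 ps

module tb;
  reg        clk  = 0;
  reg        we   = 0;
  reg  [8:0] addr = 0;
  reg  [7:0] din  = 0;
  wire [7:0] dout;

  always #5 clk = ~clk;   // 100 MHz simulation clock

  // Hypothetical design-under-test wrapping a RAMB16_S9.
  ram_dut dut (.clk(clk), .we(we), .addr(addr), .din(din), .dout(dout));

  initial begin
    // glbl.GSR holds block-RAM outputs in reset for the first ~100 ns
    // of simulation; do nothing until it has been released.
    wait (glbl.GSR === 1'b0);
    @(posedge clk) we <= 1; addr <= 9'd1; din <= 8'hA5;  // write one byte
    @(posedge clk) we <= 0;                              // read it back
    @(posedge clk) $display("dout = %h", dout);
    $finish;
  end
endmodule
```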

Paul

Ann wrote:
> 
> Hi, I am simulating using ModelSim. I saw that WE do get "0" and the address coming in is changing, but still data out is always 0 for some reason :-\ Thanks, Ann

Article: 81973
Subject: Re: Open PowerPC Core?
From: Ziggy <Ziggy@TheCentre.com>
Date: Tue, 05 Apr 2005 15:04:47 GMT
Links: << >>  << T >>  << A >>
Antti Lukats wrote:
> "Ziggy" <Ziggy@TheCentre.com> schrieb im Newsbeitrag
> news:Yww4e.13712$Vx1.3133@attbi_s01...
> 
>>Alex Freed wrote:
>>
>>>"Ziggy" <Ziggy@TheCentre.com> wrote in message
>>>news:XYS3e.131292$r55.32410@attbi_s52...
>>>
>>>
>>>>Eric Smith wrote:
>>>>. A reproduction of a 486 or base Pentium would
>>>>be plenty for what i want to do.
>>>
>>>
>>>Not being a top authority on soft cores, I'll still observe that:
>>>
>>>1. Implementing a CISC CPU is much more resource consuming than implementing a RISC core.
>>>2. x86 is way crazy because of the need to maintain compatibility with the 8086's real mode.
>>>
>>>In the late 80's Intel made a special version of the 386 (the 385, if I remember right) that was basically a 386 without the real mode.
>>>It was much cheaper than a 386 but there were no takers: x86 is used so much only because of the huge volume of written code, not because it is a good architecture.
>>>If I had to go the CISC way, I'd much rather clone a 68000. Just as much software written and a considerably better instruction set.
>>>
>>>
>>
>>But in todays world, does anything actually use
>>the 'real mode' on an x86 chip?
>>
>>Though i do agree that the 68k is a much better
>>chip, the x86 has a larger 'generic' software
>>base.
>>
>>I think the 68k has been done however.. I just
>>dont remember where i saw that at.
> 
> 
> something is at opencores not sure how useabe it is
> 
> antti
> 
> 
> 
I think you saw the 6800 core... I don't think there is a 68000 core,
unless I missed something, which is always possible.

Article: 81974
Subject: Re: Open PowerPC Core?
From: "Antti Lukats" <antti@openchip.org>
Date: Tue, 5 Apr 2005 17:12:36 +0200
Links: << >>  << T >>  << A >>
"Ziggy" <Ziggy@TheCentre.com> schrieb im Newsbeitrag
news:jOx4e.13997$Vx1.6361@attbi_s01...
> Antti Lukats wrote:
> > "Ziggy" <Ziggy@TheCentre.com> schrieb im Newsbeitrag
> > news:Yww4e.13712$Vx1.3133@attbi_s01...
> >
> >>Alex Freed wrote:
> >>
> >>>"Ziggy" <Ziggy@TheCentre.com> wrote in message
> >>>news:XYS3e.131292$r55.32410@attbi_s52...
> >>>
> >>>
> >>>>Eric Smith wrote:
> >>>>. A reproduction of a 486 or base Pentium would
> >>>>be plenty for what i want to do.
> >>>
> >>>
> >>>Not being a top authority on soft cores, I'll still observe that:
> >>>
> >>>1. Implementing a CISC CPU is much more resource consuming than implementing a RISC core.
> >>>2. x86 is way crazy because of the need to maintain compatibility with the 8086's real mode.
> >>>
> >>>In the late 80's Intel made a special version of the 386 (the 385, if I remember right) that was basically a 386 without the real mode.
> >>>It was much cheaper than a 386 but there were no takers: x86 is used so much only because of the huge volume of written code, not because it is a good architecture.
> >>>If I had to go the CISC way, I'd much rather clone a 68000. Just as much software written and a considerably better instruction set.
> >>>
> >>>
> >>
> >>But in todays world, does anything actually use
> >>the 'real mode' on an x86 chip?
> >>
> >>Though i do agree that the 68k is a much better
> >>chip, the x86 has a larger 'generic' software
> >>base.
> >>
> >>I think the 68k has been done however.. I just
> >>dont remember where i saw that at.
> >
> >
> > something is at opencores not sure how useabe it is
> >
> > antti
> >
> >
> >
> I think you saw the 6800 core... I don't think there is a 68000 core,
> unless I missed something, which is always possible.

http://www.opencores.com/projects.cgi/web/k68/overview

68K
but as said, I have not evaluated it, so I'm not sure how usable it is

antti






