Messages from 17025

Article: 17025
Subject: 100 Billion operations per sec.!
From: "Robert K. Veazey Jr." <rveazey@bellsouth.net>
Date: Fri, 25 Jun 1999 16:54:31 -0400
I had never heard of FPGAs until I recently read an article at CNN's
website <http://www.cnn.com/TECH/computing/9906/15/supercomp.idg/> that
said a company called Star Bridge Systems, Inc.
<http://www.starbridgesystems.com> has developed a revolutionary
computer using FPGAs that sits on a desktop & plugs into a standard
120 V outlet, but outperforms IBM's fastest supercomputer, called
Pacific Blue (which takes up 8,000 sq. ft. of floor space and uses
something like 3 megawatts of power), by many times. They go on to say
that this company will be selling $1000 desktop computers in about 18
months that are THOUSANDS of times faster than a 350 MHz Pentium
II-based desktop. Here are my questions:

You guys (and gals) have been using FPGAs and programming them for
some time now. What do you all think of these claims?

Do you think it would be beneficial to learn to program in their
proprietary language (called "Viva")? They will be offering an online
university for this purpose this fall, with the initial tutorials being
free. They claim that their technology is such a breakthrough that it
will quickly replace most known types of ASICs and microprocessors.

Let me know what you think.

                            Thanks,

                                    Bob

Article: 17026
Subject: Re: 100 Billion operations per sec.!
From: Jonathan Feifarek <feifarek@removethis.ieee.org>
Date: Fri, 25 Jun 1999 16:58:24 -0600
Vaporware?  I suspect they want a customer to fund the product so they
can start designing it.
A call to the Star Bridge Systems sales number resulted in an exchange I
would expect from a used car salesman.  Despite the lack of any firm price,
the $1000 computer of the future currently *starts* at $2M (later
changed to a mere $500K) but can't be bought - only leased for $200K
down plus $8K/mo for a 3-year minimum.  But of course this depends on
which of many models you want, how many you want, etc., etc.
Known work using FPGAs in Custom Computing Machines routinely breaks Cray
speeds for a specific application.  This, coupled with scalability,
hardware reuse through reconfigurability, and massive parallelism in
execution, makes units such as gigaops (billions of operations per second,
developed for comparing von Neumann-type architectures) misleading, if not
meaningless, when applied to this paradigm.  I suspect (especially after
looking at the web site) that it is precisely this type of measure that is
being bandied about.
I don't dispute that FPGA-based computing is, to be subjective, 'blindingly
fast', but I believe a) this technology is currently evolutionary rather
than revolutionary, and b) the aforementioned company has no lock on the
technology.  They have applied for a new patent which may knock me off my
humble soapbox - but the existing patent cited on the web site has not
prevented competition from companies such as VCC, Gigaops, or Annapolis
Microsystems (who all have real products), and these companies hold patents
of their own predating this one.
Jonathan
-- 
Jonathan F. Feifarek
Consulting and design
Programmable logic solutions

"Robert K. Veazey Jr." wrote:
> 
> I had never heard of FPGAs until I recently read an article at CNN's
> website <http://www.cnn.com/TECH/computing/9906/15/supercomp.idg/> that
> said a company called Star Bridge Systems, Inc.
> <http://www.starbridgesystems.com> has developed a revolutionary
> computer using FPGAs that sits on a desktop & plugs into a standard
> 120 V outlet, but outperforms IBM's fastest supercomputer, called
> Pacific Blue (which takes up 8,000 sq. ft. of floor space and uses
> something like 3 megawatts of power), by many times. They go on to say
> that this company will be selling $1000 desktop computers in about 18
> months that are THOUSANDS of times faster than a 350 MHz Pentium
> II-based desktop. Here are my questions:
> 
> You guys (and gals) have been using FPGAs and programming them for
> some time now. What do you all think of these claims?
> ...
Article: 17027
Subject: Altera: Simulation results differ...
From: freund@leei.enseeiht.fr (Lars FREUND)
Date: Sat, 26 Jun 1999 01:55:49 +0200
Hi,

I have some strange results when compiling my project for a
FLEX8000-FPGA with Altera Max+Plus II 9.21 (SunOS).

When compiling with the "Functional SNF Extractor" option, my simulation
results are correct.

When compiling with the "Timing SNF Extractor", all the routing, fitting
and so on, I get strange results. My LPM_ADD_SUB make nonsense. Data
stocked in Flipflops and used for counters "forget" the MSB. But not all
of them, only some of them. That happens even when not using "Carry
Chain" and so on.

I'd appreciate any help or comments!


Greetings,

Lars
Article: 17028
Subject: fast counter in 4013XL?
From: "Andy Peters" <apeters@noao.edu.NOSPAM>
Date: Fri, 25 Jun 1999 17:54:23 -0700
Is it reasonable to assume that I can build a 12-bit counter in VHDL using
FPGA Express and make it run faster than 50 MHz in the -09 part?

given:

    counter : process (clk, reset)
    begin
        if reset = '1' then
            cnt <= (others => '0');
        elsif clk'event and clk = '1' then
            if load = '1' then
                cnt <= initreg;
            elsif cnten = '1' then
                cnt <= cnt + '1';
            end if;
        end if;
    end process counter;

load and cnten are both synchronous with the clock.  FPGA Express tells me
that it's barely over 20 ns for some of it.

Seems sorta silly that this can't work.  Looks like it's time to do it by
hand or use a logicore adder for the counters.

Time to go home and watch the knicks lose.

-- a
------------------------------------------
Andy Peters
Sr. Electrical Engineer
National Optical Astronomy Observatories
950 N Cherry Ave
Tucson, AZ 85719
apeters@noao.edu

NY Knicks in '99:
"Ya gotta believe!"


Article: 17029
Subject: Re: fast counter in 4013XL?
From: Ray Andraka <randraka@ids.net>
Date: Fri, 25 Jun 1999 22:04:46 -0400
Andy,

This will generate two levels of logic.  Depending on the placement, you could
easily get performance even worse than 50MHz in a -09.  If you look at the
logic required for a loadable counter without the count enable, you'll see
that all four LUT inputs are used for each bit ( CIN ^ Q ) & (!LD) + (D &
LD).  The code you list below adds the count enable in the bit's equation
extending it to a 5 input function.  The synthesizer extends this to a second
level of logic rather than going to one bit per CLB.  If you place the second
level immediately adjacent to the counter you should do better, but placement
in VHDL is not for the faint of heart.

Instead of putting the count enable in the logic, you can use the flip-flop's
dedicated clock enable.  Check the Xilinx website for the code style required
to infer its use in FPGA Express.  You'll also have to assert the clock enable
when the load input is active.  The other alternative is to use the carry-in
as a count enable, although that is slower and can be a pain to code.



Andy Peters wrote:

> Is it reasonable to assume that I can build a 12-bit counter in VHDL using
> FPGA Express and make it run faster than 50 MHz in the -09 part?
>
> given:
>
>     counter : process (clk, reset)
>     begin
>         if reset = '1' then
>             cnt <= (others => '0');
>         elsif clk'event and clk = '1' then
>             if load = '1' then
>                 cnt <= initreg;
>             elsif cnten = '1' then
>                 cnt <= cnt + '1';
>             end if;
>         end if;
>     end process counter;
>
> load and cnten are both synchronous with the clock.  FPGA Express tells me
> that it's barely over 20 ns for some of it.
>
> Seems sorta silly that this can't work.  Looks like it's time to do it by
> hand or use a logicore adder for the counters.
>
> Time to go home and watch the knicks lose.
>
> -- a
> ------------------------------------------
> Andy Peters
> Sr. Electrical Engineer
> National Optical Astronomy Observatories
> 950 N Cherry Ave
> Tucson, AZ 85719
> apeters@noao.edu
>
> NY Knicks in '99:
> "Ya gotta believe!"



--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka


Article: 17030
Subject: Re: 100 Billion operations per sec.!
From: Ray Andraka <randraka@ids.net>
Date: Fri, 25 Jun 1999 22:23:49 -0400
Marketing fluff, and not much else.  Reconfigurable computing is used today
by many on commercially available platforms as well as on proprietary
boards. The hardware is the easy part.  FPGAs routinely provide algorithmic
performance well beyond that possible with Von Neumann computers (heck, I've
built my business around that fact).  Right now, there are many very bright
people working on the problem of translating the algorithms into FPGA
programs to get the efficiency.  Fact is, it ain't all that easy to do.
First there is the issue of the algorithm itself, which more often than not
is not specified in a way that maps efficiently into hardware in general and
especially into FPGAs.  Second, there are still basic problems with
automatic placement that very often  prevent automatically generated designs
from reaching the performance the devices are capable of.  There is plenty
of room left for experts to work with a design to obtain considerably more
performance.  Finally, FPGAs do not make a very good general purpose
computer.  They are well suited for tasks that require the same or very
similar operation to be performed a large number of times -- video
processing, communications processing and the like.  They fall short when
they need to perform a large variety of tasks in a short period of time, as
there is a lot of logic required for the variations.  Reconfiguration allows
some of that logic to be cached in cheap memory, but current devices take a
long time to reconfigure compared to the clock cycle the configured device
can run at.  The result is that you wind up with considerable downtime for
reconfiguration, and if you need to reconfigure a lot because of a diverse
set of tasks, you spend more time configuring than running.

As a price point, you might look at the selling prices of various FPGA
boards out there.  The cheapest ones are the better part of that $1000 price
tag, and those usually only have one FPGA on them.  Certainly not enough to
blow the doors off a Pentium 350 running non-specific random tasks!  The
general consensus currently seems to be that the computing in FPGAs is best
as a coprocessor rather than a replacement for the processor.

Robert K. Veazey Jr. wrote:

> I had never heard of FPGAs until I recently read an article at CNN's
> website <http://www.cnn.com/TECH/computing/9906/15/supercomp.idg/> that
> said a company called Star Bridge Systems, Inc.
> <http://www.starbridgesystems.com> has developed a revolutionary
> computer using FPGAs that sits on a desktop & plugs into a standard
> 120 V outlet, but outperforms IBM's fastest supercomputer, called
> Pacific Blue (which takes up 8,000 sq. ft. of floor space and uses
> something like 3 megawatts of power), by many times. They go on to say
> that this company will be selling $1000 desktop computers in about 18
> months that are THOUSANDS of times faster than a 350 MHz Pentium
> II-based desktop. Here are my questions:
>
> You guys (and gals) have been using FPGAs and programming them for
> some time now. What do you all think of these claims?
>
> Do you think it would be beneficial to learn to program in their
> proprietary language (called "Viva")? They will be offering an online
> university for this purpose this fall, with the initial tutorials being
> free. They claim that their technology is such a breakthrough that it
> will quickly replace most known types of ASICs and microprocessors.
>
> Let me know what you think.
>
>                             Thanks,
>
>                                     Bob



--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka


Article: 17031
Subject: Re: DS2 and E2 Framer???
From: "Steve W." <natpress@sprint.ca>
Date: Fri, 25 Jun 1999 23:44:22 -0400
Try "Level One"

Steve

Gerry Schneider wrote in message <377068A4.4BE9@sympatico.ca>...
>Stefan Wimmer wrote:
>>
>> Hi everybody,
>>
>> sorry for spreading this message into several newsgroups, but I'm looking for
>> DS2 and E2 (yes, that's _2_!) framer chips (or FPGA cores) and don't really
>> know where to start.
>> Search engines didn't come up with something useful, but maybe someone out
>> there in usenet land has a good tip (besides Transwich)?
>
>You might try the traditional T1/E1 suppliers like Dallas and Crystal
>Semi - they might be moving to second level or know who is. Boy, I
>wonder if a 100 MIPS Scenix processor could do 8 Mbps E2? What a great
>"virtual peripheral" that would make! (That's scenix.com in case you
>want to check).
>
>Good luck with it,
>--
>Gerry
>            @          Change "not_here" to "lsb"
>        ##_/_\__[(
>         <0_0_0>


Article: 17032
Subject: Re: Synopsys FPGA Express vs. Compiler II
From: ems@riverside-machines.com.NOSPAM
Date: Sat, 26 Jun 1999 09:34:03 GMT
On Thu, 24 Jun 1999 01:17:41 -0400, Jim Kipps <jkipps@viewlogic.com>
wrote:

> While FPGA Express uses the same language front-end
> and is subset compatible with DC...
> <snip>

this wasn't true in express 3.1 and DC 1998.05, as we discussed a few
weeks ago - are you saying that they both now use the same
analyser?

evan

Article: 17033
Subject: Re: Virtex data sheet is incomplete
From: ems@riverside-machines.com.NOSPAM
Date: Sat, 26 Jun 1999 10:28:59 GMT
On Wed, 23 Jun 1999 15:58:57 GMT, Tom Liehe
<moox@flatland.dimensional.com> wrote:

>I just downloaded the most recent PDF file of the Virtex data
>sheet (dated 5/13/99) and it has a table called "Virtex Clock
>Distribution Guidelines". This table is empty! While I really
>need more than "guidelines" - I need actual max skew data -
>these guidelines would be a start.  Xilinx, when will you be
>filling in this table?

this is a question i asked tech support some months ago - it was due
in the 1.4 datasheet, but it hasn't made it into 1.5 either. however,
i'm not really sure that it's relevant to your problem (it's really
required for pin->pin clock->out delays). the problem with skew is
that it's load-dependent, and so it wouldn't be realistic to put it in
a datasheet. however, the timing analyser will report skews on a
routed design, so you'll be able to get a worst-case skew from your
timing analysis or simulation.

the other part of the problem is the IOB clock->out skew. you won't
get this information so, if you're desperate, you'll have to guess it.
the minimum clock->out is going to be about 30% of the datasheet max,
so this gives you a worst-case. in practice, it should be better,
because this assumes that the two IOBs have temperature/ voltage/
process differences.

if you've really got to control skew, have you considered using a
clock driver chip with a guaranteed skew? you could then lock internal
clocks in the virtex to these external clocks. alternatively, you
could generate up to 4 external clocks from the virtex, using internal
DLLs to lock them to an internal source (but then you still have the
same problem with clock skew on the reference clock to the 4 DLLs, and
skew on the input buffers). 

>There is another table on the same page (page 3-30) called
>"Virtex Clock Distribution Characteristics". I suspect this
>may be useful info but I do not understand what it means.
>It gives numbers for "Global clock PAD to output" and
>"IN input to OUT output".  What are these referring to?
>I know all about Xilinx global clock buffers but it is
>unclear to me what they are trying to say here.  

'GCLK IOB and buffer'
'Global clock PAD to output' = clock pad to output of IBUFG component
'IN input to OUT output' = IBUFG output -> BUFG input -> BUFG output

ie. the sum of the two is the delay from the clock pad to the start of
the GCLK net.

evan

Article: 17034
Subject: Re: fast counter in 4013XL?
From: s_clubb@NOSPAMnetcomuk.co.uk (Stuart Clubb)
Date: Sat, 26 Jun 1999 11:29:48 GMT
Firstly to the original poster.

NEVER EVER EVER BELIEVE THE ESTIMATES FROM A SYNTHESIS TOOL.

(sorry for the shouting)

Or at least don't take them as gospel.

I've seen this soooo many times where new users assume that the
synthesis tool that is the most optimistic in timing/area estimates
must be the best. Personally I feel a Synthesis tool should be
pessimistic. Synthesis is usually fairly quick, but P&R when properly
constrained is a considerably larger chunk of time. The tool that sets
your expectations in excess of what the silicon can achieve will only
disappoint later on. But by then, it's usually too late.

>This will generate two levels of logic.  Depending on the placement, you could
>easily get performance even worse than 50MHz in a -09.  If you look at the
>logic required for a loadable counter without the count enable, you'll see
>that all four LUT inputs are used for each bit ( CIN ^ Q ) & (!LD) + (D &
>LD).  The code you list below adds the count enable in the bit's equation
>extending it to a 5 input function.  The synthesizer extends this to a second

Well maybe FPGA "not so" Express and some dodgy schematic bashers
might :-)

>level of logic rather than going to one bit per CLB.  If you place the second
>level immediately adjacent to the counter you should do better, but placement
>in VHDL is not for the faint of heart.

But not necessary in this example.

>Instead of putting the count enable in the logic, you can use the flip-flop's
>dedicated clock enable.  Check the Xilinx website for the code style required
>to infer its use in FPGA Express.  You'll also have to assert the clock enable
>when the load input is active.  The other alternative is to use the carry-in
>as a count enable, although that is slower and can be a pain to code.

All the original poster's code *should* create is a "self-enabling"
load, i.e., enable and load are OR'd together and fed to the dedicated
enable input of the flip-flops. Fortunately the code sticks with
active high, which is the right way around for 4000XL. This means that if
the enable were to be given priority in the code, you can shave a LUT
off the implementation. Speed should be constant though. The load
signal then also feeds the mux-logic that will be merged into the
FLUTs that use the carry chain etc. for the increment. Bingo, one
level of logic in theory.

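For what it's worth, the priority swap looks something like the untested
sketch below (same declarations assumed as in the original code). Note that
it changes the behaviour when load and cnten are asserted in the same clock
cycle - it counts instead of loading - so only do it if that case doesn't
matter, and whether it really saves the LUT is down to the synthesis tool.

    counter : process (clk, reset)
    begin
        if reset = '1' then
            cnt <= (others => '0');
        elsif clk'event and clk = '1' then
            if cnten = '1' then           -- enable given priority over load
                cnt <= cnt + '1';
            elsif load = '1' then
                cnt <= initreg;
            end if;
        end if;
    end process counter;
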
Synthesising your example circuit to a -09 using Leonardo Spectrum
resulted in an estimate of just under 15 ns for the critical path.
But this was the path through the input pin to the enable. Just
estimating internal frequency performance gave a path delay of about
12 ns. After P&R in a 4013xl-09 this was actually 7.8 ns (128 MHz).
Fast enough, I think.

Oh, and there's a paste of a bit of the mapping report and timing
report below.

Cheers
Stuart

Design Information
------------------
Command Line   : map -p xc4013xl-09-pq160 -o map.ncd test.ngd test.pcf

Target Device  : x4013xl
Target Package : pq160
Target Speed   : -09
Mapper Version : xc4000xl -- M1.5.29i
Mapped Date    : Sat Jun 26 11:37:18 1999

Design Summary
--------------
   Number of errors:        0
   Number of warnings:      1
   Number of CLBs:              7 out of   576    1%
      CLB Flip Flops:      12
      CLB Latches:          0
      4 input LUTs:        13
      3 input LUTs:         0
   Number of bonded IOBs:      28 out of   129   21%
      IOB Flops:            0
      IOB Latches:          0
   Number of clock IOB pads:    1 out of    12    8%
   Number of BUFGLSs:           1 out of     8   12%
   Number of RPM macros:        1
   Number of STARTUPs:          1
Total equivalent gate count for design: 227
Additional JTAG gate count for IOBs:    1344

================================================================================
Timing constraint: TS02 = MAXDELAY FROM TIMEGRP "FFS" TO TIMEGRP "FFS"
20 nS  ; 
 78 items analyzed, 0 timing errors detected.
 Maximum delay is   7.755ns.
--------------------------------------------------------------------------------
Slack:    12.245ns path result_dup0(1) to result_dup0(10) relative to
          20.000ns delay constraint

Path result_dup0(1) to result_dup0(10) contains 7 levels of logic:
Path starting from Comp: CLB_R6C23.K (from clk_int)
To                   Delay type         Delay(ns)  Physical Resource
                                                   Logical Resource(s)
-------------------------------------------------  --------
CLB_R6C23.XQ         Tcko                  1.470R  result_dup0(1)
                                                   ix319_ix43
CLB_R6C23.F1         net (fanout=2)        0.975R  result_dup0(0)
CLB_R6C23.COUT       Topcy                 1.600R  result_dup0(1)
                                                   ix319_ix8
CLB_R5C23.CIN        net (fanout=1)        0.236R  ix319_nx2
CLB_R5C23.COUT       Tbyp                  0.140R  result_dup0(2)
                                                   ix319_ix45
CLB_R4C23.CIN        net (fanout=1)        0.236R  ix319_nx43
CLB_R4C23.COUT       Tbyp                  0.140R  result_dup0(4)
                                                   ix319_ix83
CLB_R3C23.CIN        net (fanout=1)        0.236R  ix319_nx80
CLB_R3C23.COUT       Tbyp                  0.140R  result_dup0(6)
                                                   ix319_ix100
CLB_R2C23.CIN        net (fanout=1)        0.236R  ix319_nx98
CLB_R2C23.COUT       Tbyp                  0.140R  result_dup0(8)
                                                   ix319_ix116
CLB_R1C23.CIN        net (fanout=1)        0.236R  ix319_nx114
CLB_R1C23.K          Tsumc+Tick            1.970R  result_dup0(10)
                                                   ix319_ix132
                                                   ix319_ix143
                                                   ix319_ix10
-------------------------------------------------
Total (5.600ns logic, 2.155ns route)       7.755ns (to clk_int)
      (72.2% logic, 27.8% route)

--------------------------------------------------------------------------------

<snip>

Timing summary:
---------------

Timing errors: 0  Score: 0

Constraints cover 150 paths, 0 nets, and 68 connections (98.6%
coverage)

Design statistics:
   Minimum period:   7.755ns (Maximum frequency: 128.949MHz)
   Maximum path delay from/to any node:   7.755ns


Analysis completed Sat Jun 26 11:38:00 1999
--------------------------------------------------------------------------------

For Email remove "NOSPAM" from the address
Article: 17035
Subject: Virtex JTAG readback
From: adamjone@purdue.edu
Date: Sat, 26 Jun 1999 17:00:39 GMT
I'm using a Virtex XCV300 and I'm having trouble finding documentation
on how to perform readback with the JTAG/boundary-scan interface.  The
configuration and readback document (xapp138.pdf) references only the
SelectMAP method of readback.  It says to reference xapp139 for
information on readback and configuration with the JTAG.  After
calling tech support, I found that this document has not yet been
written.  I've tried simply entering the CFG_OUT instruction into the
TAP instruction register and then clocking out data, but that doesn't
do it.  Does anyone know what the proper method for readback using the
JTAG on Virtex parts is?
	Also, I have been able to configure the Virtex device after
startup using the JTAG, but I haven't been able to reconfigure the
device after the first configuration.  Is the sequence of commands
different for a second configuration?
Article: 17036
Subject: Major Exemplar Bug
From: "Edward Moore" <edmoore@digitate.freeserve.co.uk>
Date: Sat, 26 Jun 1999 18:12:50 +0100
I've just discovered that you can't instantiate most Xilinx carry chains in
Leonardo Spectrum, because it optimizes away the examine-ci element at the
top of the chain, giving errors in the Xilinx M1 mapper.
I guess this has something to do with the examine-ci CY4 macro not having
any outputs.

This is a major bug, because Leonardo still makes a mess of some inferred
arithmetic, i.e. adder-subtractors, loadable adders, etc. (OK, sometimes
these work if the wind is blowing in the right direction). Now I can't
infer or instantiate the correct structure.

Does anyone have any fixes for this, e.g. Tcl scripts or magic attributes?
How does Synplify fare with instantiated and inferred arithmetic?


Article: 17037
Subject: IP Cores for FPGA
From: "Michelle Tran" <icommtek@fpga-design.com>
Date: Sat, 26 Jun 1999 16:03:26 -0400
Available IP Cores for FPGA at http://www.fpga-design.com/


--
===========================================================
Michelle Tran                            IComm Technologies, Inc.
icommtek@fpga-design.com                 http://www.fpga-design.com/
===========================================================



Article: 17038
Subject: Re: fast counter in 4013XL?
From: Rickman <spamgoeshere4@yahoo.com>
Date: Sat, 26 Jun 1999 16:04:14 -0400
Stuart Clubb wrote:
...snip...
> >Instead of putting the count enable in the logic, you can use the flip-flop's
> >dedicated clock enable.  Check the Xilinx website for the code style required
> >to infer its use in FPGA Express.  You'll also have to assert the clock enable
> >when the load input is active.  The other alternative is to use the carry-in
> >as a count enable, although that is slower and can be a pain to code.
> 
> All the original posters code *should* create is a "self-enabling"
> load. ie, enable and load are OR'd together and fed to the dedicated
> enable input of the flip-flops. Fortunately the code sticks with
> active high, which the right way around for 4000XL. This means that if
> the enable were to be given priority in the code, you can shave a LUT
> off the implementation. Speed should be constant though. The load
> signal then also feeds the mux-logic that will be merged into the
> FLUTs that use the carry chain etc. for the increment. Bingo, one
> level of logic in theory.
> 
> Synthesising your example circuit to a -09 using Leonardo Spectrum
> resulted in an estimate of around under 15 ns for the critical path.
> But this was the path for through the input pin to the enable. Just
> estimating internal frequency performance gave a path delay of about
> 12ns. After P&R in a 4013xl-09 this was actually 7.8 ns (128 MHz).
> Fast enough I think.
> 
> Oh, and there's a paste of a bit of the mapping report and timing
> report below.
> 
> Cheers
> Stuart

I guess I am just old-fashioned. I found that it was just a lot easier
to instantiate a LogiBLOX counter (or one from the library) than to rely
on synthesis. My code used counters of different sizes and with different
surrounding logic. So I just decided what counter I needed, instantiated
the counter and then used synthesis for the supporting logic. This just
seemed to be so much quicker and easier than trying to get the
synthesizer to generate the circuit I already had in my head.

One of my counters was just like the counter you needed. So I plopped
down an 8-bit, loadable, enabled up counter and used an OR of the load
enable and the count enable to generate the CE input to the counter.
This ended up being exactly like what Stuart is describing. Of course a
non-instantiated design can be ported more easily. So you just need to
decide if you want to spend your time and money up front for
portability, or if you want to reduce your design time for a specific
target.
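
In structural VHDL the wiring looks roughly like the sketch below. The
component name and port names are hypothetical stand-ins for whatever your
LogiBLOX (or library) counter actually provides, so treat it as an
illustration of the hookup rather than a drop-in module.

    -- Hypothetical declaration for a generated 8-bit loadable,
    -- enabled up counter; substitute the names your module was built with.
    component cnt8_lb
        port (
            clock  : in  std_logic;
            clk_en : in  std_logic;
            load   : in  std_logic;
            aclr   : in  std_logic;
            d_in   : in  std_logic_vector(7 downto 0);
            q_out  : out std_logic_vector(7 downto 0)
        );
    end component;

    signal cnt_ce : std_logic;

    -- in the architecture body: OR the load enable and count enable to
    -- drive the counter's CE; the load input still selects load vs. count
    cnt_ce <= load or cnten;

    u_cnt : cnt8_lb
        port map (
            clock  => clk,
            clk_en => cnt_ce,
            load   => load,
            aclr   => reset,
            d_in   => initreg,
            q_out  => cnt
        );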


-- 

Rick Collins

rick.collins@XYarius.com

remove the XY to email me.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 17039
Subject: Re: Request for information on discontinued Xilinx XC4000-series variants
From: z80@ds2.com (Peter)
Date: Sun, 27 Jun 1999 10:04:48 GMT

>XC4000	        The original version of the family
>XC4000A         Same CLB architecture as XC4000, less routing : cheaper
>XC4000D         Same as XC4000, no CLB RAM : cheaper
>XC4000H         Same CLB architecture as XC4000, twice as many I/O
>XC4000L         Same CLB architecture as XC4000, lower power 

Are these all bitstream compatible with the original XC4k?

I am talking about using my old 1992 Viewlogic 4 + 1996 XACT6.01 for a
new design, for which I would use a small 4k device. The pricing on
the small 4k parts is quite good, 100-off.


--
Peter.

Return address is invalid to help stop junk mail.
E-mail replies to zX80@digiYserve.com but remove the X and the Y.
Please do NOT copy usenet posts to email - it is NOT necessary.
Article: 17040
Subject: Re: fast counter in 4013XL?
From: "Yip" <louiyip@iname.com>
Date: Sun, 27 Jun 1999 20:20:36 +0800
I have used FPGA Express to target Altera's FLEX 6K device (6016-3). It
works for a 32-bit counter with load and up/down direction at 50 MHz, but I
need to run it in MAX+PLUS II using FAST in the logic option for fitting
(and the "try harder" option). The fitter then uses the FastTrack routing
for the counter. Could you tell me which device you are going to use? Take
care with the fitting optimization. If you use an Altera device, please make
sure you open the *.acf file, not the *.edf, because the ACF file contains
some of the settings.

Yiu-Man (Louis/Leslie Yip)
ASM Assembly Automation Ltd.
Hong Kong

Andy Peters wrote in message <7l189t$2ua0$1@noao.edu>...
>Is it reasonable to assume that I can build a 12-bit counter in VHDL using
>FPGA Express and make it run faster than 50 MHz in the -09 part?
>
>given:
>
>    counter : process (clk, reset)
>    begin
>        if reset = '1' then
>            cnt <= (others => '0');
>        elsif clk'event and clk = '1' then
>            if load = '1' then
>                cnt <= initreg;
>            elsif cnten = '1' then
>                cnt <= cnt + '1';
>            end if;
>        end if;
>    end process counter;
>
>load and cnten are both synchronous with the clock.  FPGA Express tells me
>that it's barely over 20 ns for some of it.
>
>Seems sorta silly that this can't work.  Looks like it's time to do it by
>hand or use a logicore adder for the counters.
>
>Time to go home and watch the knicks lose.
>
>-- a
>------------------------------------------
>Andy Peters
>Sr. Electrical Engineer
>National Optical Astronomy Observatories
>950 N Cherry Ave
>Tucson, AZ 85719
>apeters@noao.edu
>
>NY Knicks in '99:
>"Ya gotta believe!"
>
>


Article: 17041
Subject: Re: 100 Billion operations per sec.!
From: "Robert K. Veazey Jr." <rveazey@bellsouth.net>
Date: Sun, 27 Jun 1999 10:48:10 -0400
Thanks for your knowledgeable response. It is greatly appreciated.

                            Bob

Ray Andraka wrote:

> Marketing fluff, and not much else.  Reconfigurable computing is used today
> by many on commercially available platforms as well as on proprietary
> boards. The hardware is the easy part.  FPGAs routinely provide algorithmic
> performance well beyond that possible with Von Neumann computers (heck, I've
> built my business around that fact).  Right now, there are many very bright
> people working on the problem of translating the algorithms into FPGA
> programs to get the efficiency.  Fact is, it ain't all that easy to do.
> First there is the issue of the algorithm itself, which more often than not
> is not specified in a way that maps efficiently into hardware in general and
> especially into FPGAs.  Second, there are still basic problems with
> automatic placement that very often  prevent automatically generated designs
> from reaching the performance the devices are capable of.  There is plenty
> of room left for experts to work with a design to obtain considerably more
> performance.  Finally, FPGAs do not make a very good general purpose
> computer.  They are well suited for tasks that require the same or very
> similar operation to be performed a large number of times -- video
> processing, communications processing and the like.  They fall short when
> they need to perform a large variety of tasks in a short period of time, as
> there is a lot of logic required for the variations.  Reconfiguration allows
> some of that logic to be cached in cheap memory, but current devices take a
> long time to reconfigure compared to the clock cycle the configured device
> can run at.  The result is that you wind up with considerable downtime for
> reconfiguration, and if you need to reconfigure a lot because of a diverse
> set of tasks, you spend more time configuring than running.
>
> As a price point, you might look at the selling prices of various FPGA
> boards out there.  The cheapest ones are the better part of that $1000 price
> tag, and those usually only have one FPGA on them.  Certainly not enough to
> blow the doors off a Pentium 350 running non-specific random tasks!  The
> general consensus currently seems to be that the computing in FPGAs is best
> as a coprocessor rather than a replacement for the processor.
>
> Robert K. Veazey Jr. wrote:
>
> > I had never heard of FPGAs until I recently read an article at CNN's
> > website <http://www.cnn.com/TECH/computing/9906/15/supercomp.idg/> that
> > said a company called Star Bridge Systems, Inc.
> > <http://www.starbridgesystems.com> has developed a revolutionary
> > computer using FPGAs that sits on a desktop & plugs into a standard
> > 120 V outlet, but outperforms IBM's fastest supercomputer, called
> > Pacific Blue (which takes up 8,000 sq. ft. of floor space and uses
> > something like 3 megawatts of power), by many times. They go on to say
> > that this company will be selling $1000 desktop computers in about 18
> > months that are THOUSANDS of times faster than a 350 MHz Pentium
> > II-based desktop. Here are my questions:
> >
> > You guys (and gals) have been using FPGAs and programming them for
> > some time now. What do you all think of these claims?
> >
> > Do you think it would be beneficial to learn to program in their
> > proprietary language (called "Viva")? They will be offering an online
> > university for this purpose this fall, with the initial tutorials being
> > free. They claim that their technology is such a breakthrough that it
> > will quickly replace most known types of ASICs and microprocessors.
> >
> > Let me know what you think.
> >
> >                             Thanks,
> >
> >                                     Bob
>
> --
> -Ray Andraka, P.E.
> President, the Andraka Consulting Group, Inc.
> 401/884-7930     Fax 401/884-7950
> email randraka@ids.net
> http://users.ids.net/~randraka

Article: 17042
Subject: Re: Altera: Simulation results differ...
From: "Henning Trispel" <htrispel@lange-electronic.de>
Date: Sun, 27 Jun 1999 18:01:19 +0200
Hi Lars,

It seems that your logic delay is larger than your clock cycle. Find the
critical path and try to break the logic into "smaller" pieces - e.g.
pipeline the logic if possible.

You may want to set the compiler option "Logic Synthesis = FAST"; this may
also help, since it reassembles logic terms during compilation to be faster,
but less routable. This usually works if you have about 50-70% of the
resources left.

You can get an estimate of your timing delays with the time delay matrix.

Hope that helps,

Henning Trispel



Lars FREUND schrieb in Nachricht
<1dtzgdq.1pmjkwoh308esN@ppp-lf.depinfo.enseeiht.fr>...
>Hi,
>
>I have some strange results when compiling my project for a
>FLEX8000-FPGA with Altera Max+Plus II 9.21 (SunOS).
>
>When compiling with the "Functional SNF Extractor" option, my simulation
>results are correct.
>
>When compiling with the "Timing SNF Extractor", all the routing, fitting
>and so on, I get strange results. My LPM_ADD_SUB make nonsense. Data
>stocked in Flipflops and used for counters "forget" the MSB. But not all
>of them, only some of them. That happens even when not using "Carry
>Chain" and so on.
>
>I'd appreciate any help or comments!
>
>
>Greetings,
>
>Lars


Article: 17043
Subject: Re: Request for information on discontinued Xilinx XC4000-series variants
From: fliptron@netcom.com (Philip Freidin)
Date: 27 Jun 1999 20:32:04 GMT
Usually you can find device equivalence by looking at the documented
bitstream length.

XC4000A and XC4000H are not bitstream compatible with anything else.
XC4000D and XC4000L are bitstream compatible with either XC4000 or
XC4000E, but I can't remember which.

But if you are planning to use "original XC4K", then XACT6.01 knows
how to do these so I don't see why you are asking the question.

Philip



In article <3776ecc7.324514226@news.netcomuk.co.uk>, Peter <z80@ds2.com> wrote:
>
>Are these all bitstream compatible with the original XC4k?
>
>I am talking about using my old 1992 Viewlogic 4 + 1996 XACT6.01 for a
>new design, for which I would use a small 4k device. The pricing on
>the small 4k parts is quite good, 100-off.
>
>>XC4000	        The original version of the family
>>XC4000A         Same CLB architecture as XC4000, less routing : cheaper
>>XC4000D         Same as XC4000, no CLB RAM : cheaper
>>XC4000H         Same CLB architecture as XC4000, twice as many I/O
>>XC4000L         Same CLB architecture as XC4000, lower power 


Article: 17044
Subject: Re: Read/Writes to memories/register files for PIC core
From: tcoonan@mindspring.com (Thomas A. Coonan)
Date: Mon, 28 Jun 1999 00:01:46 GMT
Hey Folks,

I've spent some time discussing this issue with Wade Peterson who
has a commercial version of the PIC.  If I might summarize; one way
to do this required read/modify/write operation is to use a type
of synchronous memory that offers SYNCHRONOUS WRITEs
as well as ASYNCHRONOUS READs.  Wade has shown me
an ORCA memory model that does exactly this, and infers that
many of his ASIC customers must also have such a memory.
I, however, do not see such a memory in the models I have access
to at the moment (which are XILINX Vertex, some LSI Logic
ASIC memories and some ST ASIC memories).  So.. would y'all
be so kind as to indicate if you have access to such a memory?
If you sent me the name of the vendor, memory name, etc. that
would be great.  And again, I'm hoping that these memories are "real
memories" and not just flip-flop based register files.  I'll summarize
when I get some.

My holy grail is to offer my freeware PIC that runs at 1 instruction
per clock instead of the current 2 clocks!  

Thanks.

tom coonan
www.mindspring.com/~tcoonan  (see "Synthetic PIC").

>Hi,
>
>I've asked this question once before, but it is important (to me) so
>here goes again.
>
>I have this freeware Verilog PIC CPU core
>(www.mindspring.com/~tcoonan) that I like to play with.  I have a
>nagging issue related to memories and clocks/instruction.  If you know
>about the PIC, then you know that it has separate program and data
>memories.  My current model uses 2 clocks per instruction because I'm
>using "standard" synchronous memories and I can't seem to figure out
>how to do everything that needs doing in terms of memory cycles with 1
>clock per instruction.  I'd like to offer 1 clock/instruction!  The
>real PIC actually has 4 internal "phases".  wheew..  that was a lot to
>say.  For example, the PIC allows you to execute the following code:
>
>   incf counter1, counter1
>   incf foo, foo
>   incf bar, bar
>
>The above means that the increment instruction fetches the data memory
>locations indicated (these "registers" are really from the internal
>data memory or "register file" that is potentially big, on the order
>of 1024 8-bit words), increments them (in my ALU) and rewrites the new
>value back into the data memory.  Seems like I need 2 cycles, yes?
>
>On the instruction side, I think I can operate on 1 clock per
>instruction by concurrently loading my PC register *AND* the Address
>port of my synchronous ram.  So, the problem appears to be reduced to:
>
>   How can I provide a portable memory to do a read/modify/write in
>   only one cycle?
>
>I don't think a "standard" synchronous memory can do this.  Here are
>some alternatives I know of, but I would like to hear any insights
>others can offer.  Here's my thinking and comments to date:
>
>   1)  First; Ideally, I want to stay in standard Verilog - no custom
>        memory designs or floorplanning can be used.  No
>        non-synchronous multiple clock edge techniques, no
>        latches, can be used  (i'm trying to do something very
>        portable and flexible).
>   2)  Synchronous memories are the convenient, area-efficient cells
>        that people will have access to either in ASICs or in FPGAs,
>        so perhaps, the price to pay for this is, sadly, to 
>        require 2 clocks/instruction.  This is my current status.
>   3)  Use a "register file" and just synthesize the flip-flops.  This
>        *can* offer the read/modify/write feature.  Obviously, this
>        is only practical for a dozen or two data memory locations.
>        yes?  I have always assumed that declaring something like
>        reg [7:0] data_memory[0:511] and synthesizing, would
>        be crazy, whereas reg[7:0] data_memory[0:31] might be
>        a reasonable thing to do if it buys us 1 clock/instruction.
>
>If I go with #3, then one suggestion that I have heard, is to use
>something like Module Compiler.  I don't like this because of point
>#1.  Has anyone done a register file and come up with RTL techniques
>that lead to decent speed/area?  Or, should I simply declare the 2-D
>memory, synthesize it and move on.  OR, am I missing something
>altogether!
>
>That's it.  I hope someone out there has had a similar issue and can
>help me!  (I'll be at DAC next week if anyone would actually like to
>discuss this sort of thing..)
>
>tom coonan
>tcoonan@mindspring.com
>www.mindspring.com/~tcoonan

Article: 17045
Subject: Re: Read/Writes to memories/register files for PIC core
From: Rickman <spamgoeshere4@yahoo.com>
Date: Sun, 27 Jun 1999 22:44:04 -0400
"Thomas A. Coonan" wrote:
> 
> Hey Folks,
> 
> I've spent some time discussing this issue with Wade Peterson who
> has a commercial version of the PIC.  If I might summarize; one way
> to do this required read/modify/write operation is to use a type
> of synchronous memory that offers SYNCHRONOUS WRITEs
> as well as ASYNCHRONOUS READs.  Wade has shown me
> an ORCA memory model that does exactly this, and infers that
> many of his ASIC customers must also have such a memory.
> I, however, do not see such a memory in the models I have access
> to at the moment (which are Xilinx Virtex, some LSI Logic
> ASIC memories and some ST ASIC memories).  So.. would y'all
> be so kind as to indicate if you have access to such a memory?
> If you sent me the name of the vendor, memory name, etc. that
> would be great.  And again, I'm hoping that these memories are "real
> memories" and not just flip-flop based register files.  I'll summarize
> when I get some.

I can guarantee that the Xilinx Virtex has such a memory. The Xilinx
parts since the XC4000E all have had a dual port synchronous SRAM built
into the LUT of each CLB. This will give you 32 x 1 when used as you
describe above which does not require a dual port memory if you cycle
the address using an external mux. Or you will get 16 x 1 in each CLB if
you want the memory to supply that mux. 

In addition, the Virtex parts supply separate blocks of memory with full
true dual porting. Each port can read or write independently from the
other. They can even be configured for different data widths, such as 8
bits in on one side and 16 bits out on the other. 

In the Xilinx library the CLB SRAMs are RAM32X1S and RAM16X1D for the
single port synchronous and dual port synchronous memories respectively.
I haven't worked with the Virtex, so I don't know the name of the block
RAM.

Where is Peter Alfke when you need him?


-- 

Rick Collins

rick.collins@XYarius.com

remove the XY to email me.



Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design

Arius
4 King Ave
Frederick, MD 21701-3110
301-682-7772 Voice
301-682-7666 FAX

Internet URL http://www.arius.com
Article: 17046
Subject: Re: newbie -- What's the best way to get started?
From: "raravan" <raravan@sprint.ca>
Date: Sun, 27 Jun 1999 21:38:51 -0700
As to your C/C++ question: not really, but certain higher-level languages
closely related to C, such as Handel-C, are being developed for hardware
compilation.  If you are interested in graphics, then consider the Altera
UP1 board; it has VGA support and is fairly cheap.

vrml3d.com wrote in message <7k6teh$b24$1@autumn.news.rcn.net>...
>What's the best way to start working with FPGAs?  Can it be done for approx
>$500 US, or is it impossible to do anything cool without spending a lot of
>money?  Does all of the software run on commercial Unix?  Does any of it run
>on Win98?  If not, Linux would be a bearable alternative.  How difficult is
>it to program FPGAs?  Are there any C/C++ libraries that will allow you to
>say, for example... do something like this:
>



Article: 17047
Subject: .shp, .shx, .dbf file conversion. help !
From: "Nico L." <Lonetti@pitagora.it>
Date: Mon, 28 Jun 1999 10:04:50 +0300


--
Hi.
I have a simple question.
I'll be very happy if someone could help me.

I need to codify a .shp file which is part
of a G.I.S. (Geographic Information System)
with a dbase file.

Simply I have:
- file.dbf
- file.shp
- file.shx

In the .dbf file there are data linked to
the .shp and .shx files.

I can display this shape file and obtain a
related database.
But at the moment I cannot modify the .shp file.

Where can I find a utility which allows me to
edit the .shp and .shx files?

Thanks in advance,
Nico Lonetti . Italy




Article: 17048
Subject: Re: Read/Writes to memories/register files for PIC core
From: Braam <test@azona.com>
Date: Mon, 28 Jun 1999 11:57:03 +0200
Hi,

I am a bit late in this thread, but why don't you write a memory like that
in Verilog yourself?
It can't be too difficult. Let me know if I should help.

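In case it is useful, here is a minimal, untested sketch of the kind of
memory Tom describes - synchronous write, asynchronous read. Whether a given
FPGA or ASIC library maps it onto a real RAM rather than flip-flops depends
entirely on the target and the synthesis tool, so treat the sizes and names
as placeholders.

    // Synchronous-write, asynchronous-read memory sketch:
    // 32 words of 8 bits, roughly a small PIC register file.
    module rmw_ram (clk, we, addr, din, dout);
        input        clk;
        input        we;            // write enable
        input  [4:0] addr;          // shared read/write address
        input  [7:0] din;           // data to be written
        output [7:0] dout;          // read data, available combinationally

        reg [7:0] mem [0:31];

        // The write is synchronous to the clock edge...
        always @(posedge clk)
            if (we)
                mem[addr] <= din;

        // ...while the read is asynchronous, so a location can be read,
        // modified in the ALU and written back within a single cycle.
        assign dout = mem[addr];
    endmodule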

Rickman wrote:

> "Thomas A. Coonan" wrote:
> >
> > Hey Folks,
> >
> > I've spent some time discussing this issue with Wade Peterson who
> > has a commercial version of the PIC.  If I might summarize; one way
> > to do this required read/modify/write operation is to use a type
> > of synchronous memory that offers SYNCHRONOUS WRITEs
> > as well as ASYNCHRONOUS READs.  Wade has shown me
> > an ORCA memory model that does exactly this, and infers that
> > many of his ASIC customers must also have such a memory.
> > I, however, do not see such a memory in the models I have access
> > to at the moment (which are Xilinx Virtex, some LSI Logic
> > ASIC memories and some ST ASIC memories).  So.. would y'all
> > be so kind as to indicate if you have access to such a memory?
> > If you sent me the name of the vendor, memory name, etc. that
> > would be great.  And again, I'm hoping that these memories are "real
> > memories" and not just flip-flop based register files.  I'll summarize
> > when I get some.
>
> I can guarantee that the Xilinx Virtex has such a memory. The Xilinx
> parts since the XC4000E all have had a dual port synchronous SRAM built
> into the LUT of each CLB. This will give you 32 x 1 when used as you
> describe above which does not require a dual port memory if you cycle
> the address using an external mux. Or you will get 16 x 1 in each CLB if
> you want the memory to supply that mux.
>
> In addition, the Virtex parts supply separate blocks of memory with full
> true dual porting. Each port can read or write independently from the
> other. They can even be configured for different data widths, such as 8
> bits in on one side and 16 bits out on the other.
>
> In the Xilinx library the CLB SRAMs are RAM32X1S and RAM16X1D for the
> single port synchronous and dual port synchronous memories respectively.
> I haven't worked with the Virtex, so I don't know the name of the block
> RAM.
>
> Where is Peter Alfke when you need him?
>
> --
>
> Rick Collins
>
> rick.collins@XYarius.com
>
> remove the XY to email me.
>
> Arius - A Signal Processing Solutions Company
> Specializing in DSP and FPGA design
>
> Arius
> 4 King Ave
> Frederick, MD 21701-3110
> 301-682-7772 Voice
> 301-682-7666 FAX
>
> Internet URL http://www.arius.com

Article: 17049
Subject: Re: Virtex JTAG readback
From: "Albano, David (EXCHANGE:RTP:3H91)" <dalbano@americasm01.nt.com>
Date: Mon, 28 Jun 1999 09:07:21 -0400
I think you have to pulse the PRGM_ pin low in order to reconfigure through
the JTAG port once the device has already been configured.  I know the PRGM_
pin is on the XChecker port, but this is what I have been told by my FAE.  It
must be pulsed low for 500 ns before reconfiguring through the JTAG port.

David A.

adamjone@purdue.edu wrote:

>
>         Also, I have been able to configure the Virtex device after
> startup using the JTAG, but I haven't been able to reconfigure the
> device after the first configuration.  Is the sequence of commands
> different for a second configuration?


