Quote:
Originally Posted by TheDoc
That idea is correct: they do read X amount of data, including HTML/CSS, but I think they ignore JS and header stuff.
That's the basic idea of CSS over basic HTML: fewer bytes used, so more of the text, links, menus, etc. get looked at versus trash that does nothing.
However, by using a mixture of tables and CSS, you can keep the byte size just as small as (and sometimes smaller than) going pure CSS, and almost always smaller than going pure tables.
The idea that tables over CSS (or CSS over tables) is inherently better for reading, space, bandwidth, whatever, is just wrong. That is all up to the designer/creator.
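to put rough numbers on TheDoc's byte-size point, here's a quick python toy; the markup strings are made up by me, not taken from his post:

Code:
# rough sketch: byte cost of a table layout vs. the same layout done with
# CSS classes (the styling itself would live in an external .css file, so
# those bytes would not count against the HTML document)
table_version = (
    '<table><tr>'
    '<td width="200" bgcolor="#eeeeee" valign="top">menu</td>'
    '<td valign="top">content</td>'
    '</tr></table>'
)
css_version = '<div class="menu">menu</div><div class="content">content</div>'

print(len(table_version.encode("utf-8")), "bytes for the table layout")
print(len(css_version.encode("utf-8")), "bytes for the CSS layout")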
Quote:
Originally Posted by AlienQ
No, there is no validity to this argument; I read up on that myself because I found it hard to believe. The fact is the spiders crawl the site searching for content, and they do not read table markup or CSS as "characters" for indexing. This argument is a widespread claim that is completely, 100% false.
This claim spread from none other than the CSS developers themselves, and it holds not an ounce of truth.
Spiders are built to parse out layout elements and absorb the actual content. Table markup and CSS style information are all ignored.
Though some SEs do read image alt tags as content, that text is handled separately from the main content of the page.
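before i get to my own take: the "parse out layout, keep content" behaviour AlienQ describes is easy enough to sketch. this is only an illustration using python's standard html.parser module, not how any real engine actually works:

Code:
# toy illustration of stripping layout markup and keeping the actual content
# (img alt text is picked up too, but kept in a separate bucket); this is
# NOT any real search engine's code
from html.parser import HTMLParser

class ContentExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.words = []      # visible text content
        self.alt_text = []   # img alt text, kept apart from the body text

    def handle_starttag(self, tag, attrs):
        # table/td/div tags and their attributes contribute nothing by
        # themselves; only an img's alt attribute is picked up as content
        if tag == "img":
            alt = dict(attrs).get("alt")
            if alt:
                self.alt_text.append(alt)

    def handle_data(self, data):
        self.words.extend(data.split())

page = ('<table><tr><td class="menu"><img src="logo.gif" alt="ACME Widgets">'
        '<p>Cheap widgets shipped worldwide.</p></td></tr></table>')

extractor = ContentExtractor()
extractor.feed(page)
print(extractor.words)     # ['Cheap', 'widgets', 'shipped', 'worldwide.']
print(extractor.alt_text)  # ['ACME Widgets']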
omg how did i know that i was gonna get a "yes" and a "no"...
Paging: WiredGuy
as far as what i "believe"... it makes sense to me that even if a SE knows how to "disregard" something like table tags, it still has to process that info (it is a computer, after all). so there has to be some limit on how much of your page it will read, to keep indexing "fair" across the internet; otherwise everyone would make SE pages with ungodly amounts of characters.
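to show roughly what i mean, here's a little python sketch. the 100 KB cap is just a number i made up for the example, i'm not saying any engine actually uses it:

Code:
# made-up "crawl budget": the crawler only ever looks at the first N bytes
# of the raw HTML, so table markup eats into the budget even though it gets
# thrown away later (the 100 KB figure is purely an assumption for the demo)
CRAWL_BUDGET = 100 * 1024

def indexable_slice(raw_html: bytes) -> bytes:
    # anything past the budget is never even parsed
    return raw_html[:CRAWL_BUDGET]

bloated = b"<table><tr><td>" * 10000 + b"the actual content" + b"</td></tr></table>"
lean = b"<p>the actual content</p>"

print(b"the actual content" in indexable_slice(bloated))  # False, markup used up the budget
print(b"the actual content" in indexable_slice(lean))     # True

obviously real engines are smarter than a dumb byte cutoff, but the point is the same: the more layout junk sitting in front of your content, the less of that content fits inside whatever limit they do use.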