Abstract

Separable hyperstructure and delayed link binding. David F. Brailsford, University of Nottingham, School of Computer Science and Information Technology, Nottingham NG7 2RD, UK. ACM Computing Surveys, Volume 31, Issue 4es, December 1999. https://doi.org/10.1145/345966.346029. Published online: 1 December 1999.

Highlights

  • Research on electronic hypertext systems stretches back more than 30 years, to Engelbart's [Engelbart 1968] and Nelson's [Nelson 1987a] astonishingly far-sighted work.

  • It turns out that, among all the decisions to be made about the way hyperlinks are implemented, one issue is worthy of analysis and debate: are the links to be tightly bound, as an inseparable part of the material to which they apply, or does the implementation allow a degree of link separability, so that links can be held externally in some form of link base? On the face of it this seems a simple issue of whether a separate link base can help in maintaining the integrity of large numbers of links, but there is far more to it than that.

  • Now that a huge body of documentation is available on the Web, it is evident that HTML is too weak, architecturally, to support a robust, separable hyperlinking scheme; and yet adopting the Standard Generalized Markup Language (SGML) as the metasyntactic framework for future Web tagsets could lead to enormous parsing problems.


Summary

Background

Research on electronic hypertext systems stretches back more than 30 years, to Engelbart's [Engelbart 1968] and Nelson's [Nelson 1987a] astonishingly far-sighted work. It progressed via dynamic (shareable) linked libraries and remote procedure calls (the latter guaranteeing unique names over a network) to encompass the idea of uniquely named objects that could persist for as long as they were needed. It is depressing (but perhaps inevitable) that when computers became an affordable mass-market commodity -- via the IBM PC and its successor clones -- they were forced, for reasons of cost, chip availability, and a host of nontechnical factors, to adopt a hardware and software architecture that was initially shackled to single-user operation and little better than that of a first-generation computer of the early 1950s. Only after 25 years of development have modern Pentium chips and Windows NT enabled the PC to attain the sophistication of shareable code and shareable dynamic linked libraries that was available to users of MULTICS in 1972. This story of advance, retrenchment, and regrouping applies with equal force to hypertextual architectures. Systems that supported link separation were available more than 15 years ago, but today's World Wide Web has a document architecture, in HTML, that is an uneasy combination of structural and layout features, and in which every document has to have its hyperlinks hard-coded within itself.
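The contrast between embedded links and a separable link base can be sketched in a few lines of code. The following Python fragment is purely illustrative (the anchor syntax, function names, and URLs are invented for this sketch, not drawn from any system described in the paper): the document carries only named anchor points, while an external link base maps anchor names to targets, so links can be rebound, at presentation time, without touching the document itself.

```python
import re

# Hypothetical document: it contains only named anchor points,
# not hard-coded link targets (contrast with HTML's <a href=...>).
document = "See the <anchor id='spec'/> for details."

# External link base: anchor id -> target (illustrative values only).
linkbase = {"spec": "https://example.org/spec-v1.html"}

def bind_links(doc: str, links: dict) -> str:
    """Delayed link binding: resolve each anchor against the
    external link base only when the document is presented."""
    def resolve(match):
        anchor_id = match.group(1)
        # A missing entry surfaces as a visibly broken link
        # rather than corrupting the document source.
        target = links.get(anchor_id, "#broken-link")
        return f"<a href='{target}'>{anchor_id}</a>"
    return re.sub(r"<anchor id='([^']+)'/>", resolve, doc)

# Rebinding a link means updating the link base, not editing
# every document that refers to the anchor.
linkbase["spec"] = "https://example.org/spec-v2.html"
print(bind_links(document, linkbase))
```

With embedded links, the equivalent change would require editing the `href` inside every document that points at the old target; with a separable link base, one update to the mapping suffices, which is the maintenance advantage the highlights allude to.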

Hypertext architectures
To embed or not to embed?
Findings
Conclusions
