You can subscribe to this list (daisymfc-developer) via its SourceForge page.
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2005 | | | | (1) | (2) | (6) | (1) | (1) | (14) | (14) | (7) | (3) |
| 2006 | | | | | | | | | (2) | | | |
| 2007 | | (7) | (51) | (35) | (8) | (2) | (4) | (1) | (2) | (11) | | (4) |
| 2008 | (8) | (2) | (5) | (4) | (6) | (6) | (3) | (18) | (22) | (17) | (4) | (6) |
| 2009 | (1) | (9) | (12) | (11) | (11) | (6) | (3) | | (20) | (6) | (15) | (10) |
| 2010 | (16) | (20) | (10) | (2) | (2) | | (2) | | (2) | (1) | | (2) |
| 2011 | | (6) | (4) | | (8) | | (1) | | | (1) | (1) | (2) |
| 2012 | (1) | | (6) | | | | | | | | | |
| 2013 | (2) | | | | (1) | | | | | | | |
| 2014 | (1) | | | | | (2) | | | | | (1) | |
From: Greg K. <gke...@gm...> - 2014-11-03 19:03:25
Hello; this is somewhat off topic, but the Commonwealth Braille & Talking Book Cooperative is reaching out to Francophone braille and talking book institutions. If you are associated with such an organization, or know those who are, would you please contact us at in...@cb... Thank you.

Commonwealth Braille & Talking Book Cooperative
Greg Kearney, General Manager
605 Robson Street, Suite 850
Vancouver BC V6B 5J3
CANADA
Email: in...@cb...

U.S. Address
21908 Almaden Av.
Cupertino, CA 95014
UNITED STATES
Email: gke...@gm...
From: Greg K. <gke...@gm...> - 2014-06-24 16:05:21
Here is the file I was trying to convert; it worked fine in the GUI version of Pipeline.
From: Romain D. <rde...@gm...> - 2014-06-24 07:16:53
Can you provide a sample input so that we can try to reproduce? Thanks, Romain.
PS: the Google Group 'dai...@go...' is for the Pipeline 2 discussion only. For Pipeline 1, please use the DAISY forums or the mailing list 'dai...@li...'
On 23 June 2014, at 23:52, Greg Kearney <gke...@gm...> wrote:
> Hello I am getting the following error when running CLI Pipeline 1:
>
> gkearney$ /Users/gkearney/Sync/Braille/pipeline-20111215/pipeline.sh /Users/gkearney/Sync/Braille/pipeline-20111215/scripts/create_distribute/pef/BrailleTextToPEF.taskScript --input=/Users/gkearney/Desktop/BANA2UEB.brf --mode=org_daisy.BrailleEditorsTableProvider.TableType.BRF --output=/Users/gkearney/Desktop/test.pef
> [DEBUG, Pipeline Core] Loading Transformer Text to PEF (class org_pef_text.text2pef.Text2PEF)
> [DEBUG, Pipeline Core] Transformer loaded from file:/Users/gkearney/Sync/Braille/pipeline-20111215/transformers/
> [DEBUG, Pipeline Core] Loading Transformer Validator (class int_daisy_validator.ValidatorDriver)
> [DEBUG, Pipeline Core] Transformer loaded from file:/Users/gkearney/Sync/Braille/pipeline-20111215/transformers/
> [STATE] Task Text to PEF just started
> [DEBUG, ???] Job Parameters:
> - input:/Users/gkearney/Desktop/BANA2UEB.brf
> - mode:org_daisy.BrailleEditorsTableProvider.TableType.BRF
> - output:/Users/gkearney/Desktop/test.pef
> - title:
> - author:
> - identifier:
> - language:
>
> [STATE] Transformer Text to PEF just started
> [ERROR, Pipeline Core] Unexpected Error.
> org.daisy.pipeline.exception.JobFailedException: Error running script: Unexpected Error..
> at org.daisy.pipeline.core.script.Runner.execute(Runner.java:124)
> at org.daisy.pipeline.core.PipelineCore.execute(PipelineCore.java:280)
> at org.daisy.pipeline.ui.CommandLineUI.main(CommandLineUI.java:213)
> java.lang.NullPointerException
> at org.daisy.braille.pef.TextHandler.<init>(TextHandler.java:198)
> at org.daisy.braille.pef.TextHandler.<init>(TextHandler.java:39)
> at org.daisy.braille.pef.TextHandler$Builder.build(TextHandler.java:162)
> at org_pef_text.text2pef.Text2PEF.execute(Text2PEF.java:64)
> at org.daisy.pipeline.core.transformer.Transformer.executeWrapper(Transformer.java:174)
> at org.daisy.pipeline.core.transformer.TransformerHandler.run(TransformerHandler.java:124)
> at org.daisy.pipeline.core.script.Runner.execute(Runner.java:98)
> at org.daisy.pipeline.core.PipelineCore.execute(PipelineCore.java:280)
> at org.daisy.pipeline.ui.CommandLineUI.main(CommandLineUI.java:213)
> gkearney:~ gkearney$
>
> This same file converts without issue in the GUI version.
> Can anyone offer some help here?
>
> Commonwealth Braille & Talking Book Cooperative
> Greg Kearney, General Manager
> 605 Robson Street, Suite 850
> Vancouver BC V6B 5J3
> CANADA
> Email: in...@cb...
>
> U.S. Address
> 21908 Almaden Av.
> Cupertino, CA 95014
> UNITED STATES
> Email: gke...@gm...
>
> --
> You received this message because you are subscribed to the Google Groups "DAISY Pipeline Developers" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to dai...@go....
> For more options, visit https://groups.google.com/d/optout.
From: Christian E. <chr...@sb...> - 2014-01-09 17:02:46
Hi all,

I created an extra build file to generate a Debian package for the DAISY Pipeline. The build file imports build-core.xml and uses a control file which I placed in debian/control. The generated deb installs under /usr/lib/daisy-pipeline, except for the documentation, which goes under /usr/share/doc.

I would like to commit these files; they should not interfere with any of the existing build files. Can I commit?

Thanks
Christian
--
Christian Egli
Swiss Library for the Blind, Visually Impaired and Print Disabled
Grubenstrasse 12, CH-8045 Zürich, Switzerland
-----
SBS Leser, the new audiobook app for iPhone and iPad. More information at http://online.sbs.ch
From: Romain D. <rde...@gm...> - 2013-05-10 13:56:40
Dear All,
As you may have heard, SourceForge has been progressively upgrading hosted projects to a new developer platform, codenamed "Allura".
Our Pipeline projects ("daisymfc" [1] and "daisymfcgui" [2]) have now been upgraded. Source code repositories have been relocated, which means you should do a fresh checkout or change SVN to point to the new repo [3] (see the new URLs in the forwarded email below).
Mailing lists and other features should work as before.
Best regards,
Romain.
PS: in the end I will *not* migrate the project to GitHub as proposed a few months ago; people at TPB (one of the top contributors, as you all know) are using the SVN repos from time to time, and they are (legitimately) concerned about the learning cost of switching to Git.
[1] https://sourceforge.net/projects/daisymfc/
[2] https://sourceforge.net/projects/daisymfcgui/
[3] https://sourceforge.net/p/forge/community-docs/Repository%20Upgrade%20FAQ/#how-do-i-change-svn-to-point-to-the-new-repo
Begin forwarded message:
> From: SourceForge.net <nor...@in...>
> Subject: SourceForge Project Upgrade - Code Repo Complete
> Date: 10 May 2013 15:41:27 UTC+02:00
> To: no...@in...
> Reply-To: no...@in...
>
> Your code repository in upgraded project daisymfc is now ready for use.
>
> Old repository url: http://daisymfc.svn.sourceforge.net/svnroot/daisymfc
>
> New repository checkout command: svn checkout --username=romaindeltour svn+ssh://romaindeltour@svn.code.sf.net/p/daisymfc/code/ daisymfc-code
>
> You should do a checkout using the new repository location. The old repository is read-only now.
>
> For more detailed instructions on migrating to your new repo, please see https://sourceforge.net/p/forge/community-docs/Repository%20Upgrade%20FAQ/
>
> --
> SourceForge.net has sent this mailing to you as a registered user of
> the SourceForge.net site to convey important information regarding
> your SourceForge.net account or your use of SourceForge.net services.
> If you have concerns about this mailing please contact our Support
> team per: http://sourceforge.net/support
From: Alex B. <ale...@fr...> - 2013-01-14 11:12:09
Hello,

On Mon, Jan 14, 2013 at 11:16:17AM +0100, Romain Deltour wrote:
[...]
> I was thinking it could be an opportunity to migrate our source code to GitHub, which we think is beneficial for several reasons:
>
> - it would be easier for organizations to maintain their own forks or customizations of the Pipeline1 code base
> - all the Pipeline-related DAISY projects would be located in one single place [1]
>
> We would keep on using SF for the mailing lists, and probably file releases too.
>
> What do you think? Would it break someone's workflow?

I agree with the two arguments you mentioned above. And I prefer the GitHub interface, so I will be happy if you migrate the Pipeline source code :)

Kind regards,
Alex
From: Romain D. <rde...@gm...> - 2013-01-14 10:16:32
Dear all,
SourceForge is upgrading to a new platform, and as a result they'll be retiring the classic SourceForge platform, which our project is still using. We've been asked to migrate before the end of Q1 2013.
I was thinking it could be an opportunity to migrate our source code to GitHub, which we think is beneficial for several reasons:
- it would be easier for organizations to maintain their own forks or customizations of the Pipeline1 code base
- all the Pipeline-related DAISY projects would be located in one single place [1]
We would keep on using SF for the mailing lists, and probably file releases too.
What do you think? Would it break someone's workflow? Feel free to send me both positive and negative feedback on the topic; you can use the mailing list or private emails to me. We'll decide whether to migrate to GitHub depending on your feedback.
Best regards,
Romain.
[1] https://github.com/daisy-consortium/
From: Romain D. <rde...@gm...> - 2012-03-19 15:12:00
Hi Christian and all,

The DOCTYPE declaration was not copied through when word detection was enabled. Attached is a patch (for the GUI) that should fix the issue. I also committed the fix in version 2685.

The responsible code was in XMLWordDetector, line 140. The DOCTYPE event is only copied through if a so-called "expAttribute" declared in the grammar config file is null or in the default namespace. The config file used for DTBook declared such an attribute in the http://www.tpb.se/vnml namespace, so the DOCTYPE was not copied through (causing invalidity of the resulting DTBook). I didn't find what this configuration entry was for, nor why the code looked at the namespace of this attribute to enable/disable the DOCTYPE copy, so for now I simply uncommented the test to copy the DOCTYPE through.

I ran some tests with the input files I have and all seems to work well. If it ever breaks someone's workflow (for example at TPB), please chime in.

Romain.
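The failure mode Romain describes (a rewrite stage silently dropping the DOCTYPE, so a downstream validator can no longer determine the document version) can be sketched in miniature. This is not the Pipeline's Java code; it is a hypothetical Python analogue using only the standard library, where `ElementTree` plays the role of a filter that never sees the DOCTYPE event:

```python
import io
import re
import xml.etree.ElementTree as ET

SRC = """<?xml version="1.0"?>
<!DOCTYPE dtbook PUBLIC "-//NISO//DTD dtbook 2005-3//EN"
  "http://www.daisy.org/z3986/2005/dtbook-2005-3.dtd">
<dtbook><book><p>text</p></book></dtbook>"""

# Naive rewrite: ElementTree exposes only the element tree, so the
# DOCTYPE is silently dropped on re-serialization -- the same symptom
# the word-detection stage produced (a DTBook without a doctype).
root = ET.parse(io.StringIO(SRC)).getroot()
naive = ET.tostring(root, encoding="unicode")
assert "<!DOCTYPE" not in naive

# Fix in the spirit of the patch: unconditionally copy the DOCTYPE
# through, here by carrying it over from the source document verbatim.
doctype = re.search(r"<!DOCTYPE[^>]*>", SRC).group(0)
fixed = doctype + "\n" + naive
assert fixed.startswith("<!DOCTYPE")
```

The design point is the same as in the real fix: the stage that rewrites the document must forward the DOCTYPE event unconditionally, rather than gate it on an unrelated configuration attribute.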
From: Romain D. <rde...@gm...> - 2012-03-19 09:56:14
For the record, the issue is being discussed in the forums: http://www.daisy.org/forums/17887
Romain.
On 17 March 2012, at 22:42, Mikołaj Rotnicki wrote:
> Hello,
>
> Daisy Pipeline 20111215 for Mac OS X. I have Mac OS X 10.7 Lion and several TTS voices that are included with the system. I managed to configure TTS voices in the ttsbuilder.xml file.
>
> In general, when using the TTS Narrator job to create a full DAISY DTB everything works fine: the book is created correctly, and the TTS voice is selected/matched correctly by DAISY Pipeline.
>
> But when a sentence in a book contains any diacritical signs (for example Polish ą ę ć ś ź, German ä, ö, French é, â, etc.), regardless of language, the job returns an error >> The TTS failed to speak "[the sentence with the diacritic signs]" <<. Although the mp3 files are generated, the sentences that contain diacritical signs are silent in the audio file.
>
> I also came across this post: http://www.daisy.org/forums/167 but it seems that in my case it is not necessarily that issue, as the mp3s are generated correctly when the book/sentence does not contain diacritical signs.
>
> How could I solve this problem?
>
> ---
> Mikołaj Rotnicki
>
> ------------------------------------------------------------------------------
> This SF email is sponsored by:
> Try Windows Azure free for 90 days Click Here
> http://p.sf.net/sfu/sfd2d-msazure
> _______________________________________________
> Daisymfc-developer mailing list
> Dai...@li...
> https://lists.sourceforge.net/lists/listinfo/daisymfc-developer
From: Mikołaj R. <rot...@gm...> - 2012-03-17 21:42:49
Hello,

Daisy Pipeline 20111215 for Mac OS X. I have Mac OS X 10.7 Lion and several TTS voices that are included with the system. I managed to configure TTS voices in the ttsbuilder.xml file.

In general, when using the TTS Narrator job to create a full DAISY DTB everything works fine: the book is created correctly, and the TTS voice is selected/matched correctly by DAISY Pipeline.

But when a sentence in a book contains any diacritical signs (for example Polish ą ę ć ś ź, German ä, ö, French é, â, etc.), regardless of language, the job returns an error >> The TTS failed to speak "[the sentence with the diacritic signs]" <<. Although the mp3 files are generated, the sentences that contain diacritical signs are silent in the audio file.

I also came across this post: http://www.daisy.org/forums/167 but it seems that in my case it is not necessarily that issue, as the mp3s are generated correctly when the book/sentence does not contain diacritical signs.

How could I solve this problem?

---
Mikołaj Rotnicki
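The root cause is not identified in this thread (it was followed up in the forums), but the symptom (ASCII-only sentences speak fine while any sentence containing a diacritic fails) is the classic signature of a character-encoding mismatch somewhere between the DTBook text and the TTS engine. A minimal, purely hypothetical Python sketch of that failure mode, not a diagnosis of the Pipeline itself:

```python
# Hypothetical illustration only: text produced as UTF-8 but consumed
# by a downstream tool that assumes Latin-1 (or the platform default).
def misdecode(s: str) -> str:
    """Simulate a UTF-8 byte stream being read as Latin-1."""
    return s.encode("utf-8").decode("latin-1")

plain = "Hello world"      # ASCII bytes are identical in both encodings
accented = "ąęćśź é ä ö"   # multi-byte UTF-8 sequences get mangled

assert misdecode(plain) == plain        # ASCII sentences are unaffected
assert misdecode(accented) != accented  # accented sentences arrive garbled
```

A TTS engine handed garbled (or undecodable) text in this way could plausibly refuse to speak just those sentences while the rest of the book renders normally, which would match the silent-but-generated mp3 behaviour; under that assumption, checking that the DTBook file, ttsbuilder.xml, and the voice invocation all agree on UTF-8 would be one avenue to explore.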
From: Romain D. <rde...@gm...> - 2012-03-16 02:10:03
Hi Christian, FYI I've added the issue to my TODO list and will have a look early next week. I'll keep you posted. Romain. On 14 mars 2012, at 11:59, Christian Egli wrote: > Hi all > > I wanted to enable word detection in the textOnlyDtbCreator task script. > So I added this parameter to the task script and invoke the > se_tpb_xmldetection transformer with the corresponding value for the > 'doWordDetection' parameter. All seems to work fine, the sentence and > the word detection seems to take place and the text-only book creator > seems to do its job. However when the validator tries to validate the > book it fails with > > [ERROR, Validator] An error occurred while validating the fileset: Cannot determine version of tmp.xml; missing doctype > > Any ideas what is going on here? > > The patched task script, the dtbook file and the shell output are > attached. > > <DTBookToDaisy3TextOnlyDTB.taskScript><tmp.xml>~/tmp/pipeline/pipeline.sh ~/tmp/pipeline/scripts/create_distribute/dtb/DTBookToDaisy3TextOnlyDTB.taskScript --doWordDetection=true --input=tmp.xml --outputPath=`basename test_datei_TextOnly_WordDetection.zip .zip` > [DEBUG, Pipeline Core] Loading Transformer DTBook Fix (class se_tpb_dtbookFix.DTBookFix) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120313/transformers/ > [DEBUG, Pipeline Core] Loading Transformer Validator (class int_daisy_validator.ValidatorDriver) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120313/transformers/ > [DEBUG, Pipeline Core] Loading Transformer XML Detection (class se_tpb_xmldetection.XMLDetection) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120313/transformers/ > [DEBUG, Pipeline Core] Loading Transformer Mixed Content Normalizer (class int_daisy_mixedContentNormalizer.MixedContentNormalizer) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120313/transformers/ > [DEBUG, Pipeline Core] 
Loading Transformer Z39.86-2005 text-only book creator (beta) (class us_rfbd_textOnlyDtbCreator.TextOnlyDtbCreator) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120313/transformers/ > [DEBUG, Pipeline Core] Loading Transformer System cleanup (class pipeline_system_deleter.Deleter) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120313/transformers/ > [STATE] Task DAISY XML (DTBook) to DAISY 3 Text-Only DTB [BETA] just started > [DEBUG, ???] Job Parameters: > - input:tmp.xml > - outputPath:test_datei_TextOnly_WordDetection > - dtbookFix:REPAIR_TIDY_NARRATOR > - doWordDetection:true > > [STATE] Transformer DTBook Fix just started > [INFO_FINER, DTBook Fix] Validating /tmp/temp3077907975376145371.tmp... > [INFO_FINER, DTBook Fix] Document was valid. > [INFO_FINER, DTBook Fix] Running Repair Category... > [INFO_FINER, DTBook Fix] Running Level normalizer... > [INFO_FINER, DTBook Fix] Running Level splitter... > [INFO_FINER, DTBook Fix] Running Level repairer... > [INFO_FINER, DTBook Fix] Running Illegal heading removal... > [INFO_FINER, DTBook Fix] Running Flatten redundant nesting... > [INFO_FINER, DTBook Fix] Running Complete structure... > [INFO_FINER, DTBook Fix] Running List repair... > [INFO_FINER, DTBook Fix] Running IDREF repair... > [INFO_FINER, DTBook Fix] Running Remove invalid empty elements... > [INFO_FINER, DTBook Fix] Running Pagenum page attribute repair... > [INFO_FINER, DTBook Fix] Running Metadata repair... > [INFO_FINER, DTBook Fix] Running Empty MathML stripper... > [INFO_FINER, DTBook Fix] Running Invalid URI repair... > [INFO_FINER, DTBook Fix] Validating /tmp/temp3077907975376145371.tmp... > [INFO_FINER, DTBook Fix] Document was valid. > [INFO_FINER, DTBook Fix] Running Tidy Category... > [INFO_FINER, DTBook Fix] Running Remove empty elements... > [INFO_FINER, DTBook Fix] Running Move pagenum... > [INFO_FINER, DTBook Fix] Running Pagenum page attribute tidy... 
> [INFO_FINER, DTBook Fix] Running Change inline pagenum to block... > [INFO_FINER, DTBook Fix] Running Author and Title addition... > [INFO_FINER, DTBook Fix] Running Add xml:lang... > [INFO_FINER, DTBook Fix] Running Indenter... > [INFO_FINER, DTBook Fix] Validating /tmp/temp3077907975376145371.tmp... > [INFO_FINER, DTBook Fix] Document was valid. > [INFO_FINER, DTBook Fix] Running Narrator Category... > [INFO_FINER, DTBook Fix] Running Narrator required metadata... > [INFO, DTBook Fix] XSLT message: Removing dc:Subject lacking content > [INFO, DTBook Fix] XSLT message: Removing dc:Description lacking content > [INFO_FINER, DTBook Fix] Running Narrator required headings (rule 14)... > [INFO, DTBook Fix] XSLT message: Added a dummy h1 > [INFO_FINER, DTBook Fix] Running Narrator required headings (rule 100)... > [INFO_FINER, DTBook Fix] Running Narrator title... > [INFO_FINER, DTBook Fix] Running Narrator lists or dl in p... > [INFO_FINER, DTBook Fix] Validating /tmp/temp3077907975376145371.tmp... > [INFO_FINER, DTBook Fix] Document was valid. > [INFO_FINER, DTBook Fix] Running Indenter... > [INFO_FINER, DTBook Fix] Running Indenter... > [STATE] Transformer DTBook Fix just stopped > [STATE] Transformer Validator just started > [INFO_FINER, Validator] Validating a Dtbook document. > [INFO_FINER, Validator] Proceeding with (inline and/or external) schema validation with 1 schemas. > [INFO_FINER, Validator] Validating using the Schematron Schema /dtbook-2005-narrator.sch. > [INFO_FINER, Validator] Completed full validation of input fileset. > [INFO_FINER, Validator] Completed validation using inline and/or inparameter Schemas. > [INFO, Validator] No errors or warnings reported. Congratulations! 
> [STATE] Transformer Validator just stopped > [STATE] Transformer XML Detection just started > [INFO_FINER, XML Detection] Using input file test_datei_TextOnly_WordDetection/temp/dtbookFix.xml > [INFO_FINER, XML Detection] Using output file test_datei_TextOnly_WordDetection/temp/dtbookSentence.xml > [INFO_FINER, XML Detection] Starting sentence detection > [INFO_FINER, XML Detection] Temp sent: /tmp/temp4460849038424714215.tmp > [INFO_FINER, XML Detection] Sentence detection finished > [INFO_FINER, XML Detection] Starting word detection > [DEBUG, XML Detection] Temp word: /tmp/temp9222268765521892547.tmp > [INFO_FINER, XML Detection] Word detection finished > [INFO_FINER, XML Detection] Copying to final destination > [INFO_FINER, XML Detection] Copying referred files > [STATE] Transformer XML Detection just stopped > [STATE] Transformer Mixed Content Normalizer just started > [INFO_FINER, Mixed Content Normalizer] Input document has 458 elements > [INFO_FINER, Mixed Content Normalizer] 11 wrapper element inserts were done to input document during normalization. > [INFO_FINER, Mixed Content Normalizer] 233 synchronization points were located and marked in input document. 
> [STATE] Transformer Mixed Content Normalizer just stopped > [STATE] Transformer Z39.86-2005 text-only book creator (beta) just started > [INFO, Z39.86-2005 text-only book creator (beta)] Input dtbook file = test_datei_TextOnly_WordDetection/temp/tmp.xml > [INFO, Z39.86-2005 text-only book creator (beta)] Output path = test_datei_TextOnly_WordDetection > [INFO, Z39.86-2005 text-only book creator (beta)] Resource file = /home/eglic/tmp/pipeline-20120313/transformers/us_rfbd_textOnlyDtbCreator/toResource.res > [INFO, Z39.86-2005 text-only book creator (beta)] Configuration file = /home/eglic/tmp/pipeline-20120313/transformers/us_rfbd_textOnlyDtbCreator/toConfig.xml > [INFO, Z39.86-2005 text-only book creator (beta)] Reading config file > [INFO, Z39.86-2005 text-only book creator (beta)] Filtering input file to collect data, and add @id's and @smilRef's > [INFO, Z39.86-2005 text-only book creator (beta)] Handling SMIL links > [INFO, Z39.86-2005 text-only book creator (beta)] Generating SMIL files > [INFO, Z39.86-2005 text-only book creator (beta)] Generating NCX > [INFO, Z39.86-2005 text-only book creator (beta)] Copying files to output > [INFO, Z39.86-2005 text-only book creator (beta)] Generating OPF > [STATE] Transformer Z39.86-2005 text-only book creator (beta) just stopped > [STATE] Transformer Validator just started > [INFO_FINER, Validator] Validating a Z3986 DTB. > [INFO, Validator] Validating with ZedVal version 2.1 > [ERROR, Validator] An error occurred while validating the fileset: Cannot determine version of tmp.xml; missing doctype > . > [ERROR, Pipeline Core] Exceptions occurred during validation process. Transformer aborting. > org.daisy.pipeline.exception.JobFailedException: Exceptions occurred during validation process. Transformer aborting. 
> at org.daisy.pipeline.core.script.Runner.execute(Runner.java:116) > at org.daisy.pipeline.core.PipelineCore.execute(PipelineCore.java:280) > at org.daisy.pipeline.ui.CommandLineUI.main(CommandLineUI.java:213) > org.daisy.pipeline.exception.TransformerRunException: Exceptions occurred during validation process. Transformer aborting. > at int_daisy_validator.ValidatorDriver.execute(ValidatorDriver.java:318) > at org.daisy.pipeline.core.transformer.Transformer.executeWrapper(Transformer.java:174) > at org.daisy.pipeline.core.transformer.TransformerHandler.run(TransformerHandler.java:124) > at org.daisy.pipeline.core.script.Runner.execute(Runner.java:98) > at org.daisy.pipeline.core.PipelineCore.execute(PipelineCore.java:280) > at org.daisy.pipeline.ui.CommandLineUI.main(CommandLineUI.java:213) > ~/tmp/pipeline/pipeline.sh ~/tmp/pipeline/scripts/create_distribute/dtb/DTBookToDaisy3TextOnlyDTB.taskScript --doSentenceDetection=false --doWordDetection=true --input=tmp.xml --outputPath=`basename test_datei_TextOnly_WordDetection.zip .zip` > [DEBUG, Pipeline Core] Loading Transformer DTBook Fix (class se_tpb_dtbookFix.DTBookFix) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120314/transformers/ > [DEBUG, Pipeline Core] Loading Transformer Validator (class int_daisy_validator.ValidatorDriver) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120314/transformers/ > [DEBUG, Pipeline Core] Loading Transformer XML Detection (class se_tpb_xmldetection.XMLDetection) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120314/transformers/ > [DEBUG, Pipeline Core] Loading Transformer Mixed Content Normalizer (class int_daisy_mixedContentNormalizer.MixedContentNormalizer) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120314/transformers/ > [DEBUG, Pipeline Core] Loading Transformer Z39.86-2005 text-only book creator (beta) (class 
us_rfbd_textOnlyDtbCreator.TextOnlyDtbCreator) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120314/transformers/ > [DEBUG, Pipeline Core] Loading Transformer System cleanup (class pipeline_system_deleter.Deleter) > [DEBUG, Pipeline Core] Transformer loaded from file:/home/eglic/tmp/pipeline-20120314/transformers/ > [STATE] Task DAISY XML (DTBook) to DAISY 3 Text-Only DTB [BETA] just started > [DEBUG, ???] Job Parameters: > - input:tmp.xml > - outputPath:test_datei_TextOnly_WordDetection > - dtbookFix:REPAIR_TIDY_NARRATOR > - doSentenceDetection:false > - doWordDetection:true > > [STATE] Transformer DTBook Fix just started > [INFO_FINER, DTBook Fix] Validating /tmp/temp1089616047982658169.tmp... > [INFO_FINER, DTBook Fix] Document was valid. > [INFO_FINER, DTBook Fix] Running Repair Category... > [INFO_FINER, DTBook Fix] Running Level normalizer... > [INFO_FINER, DTBook Fix] Running Level splitter... > [INFO_FINER, DTBook Fix] Running Level repairer... > [INFO_FINER, DTBook Fix] Running Illegal heading removal... > [INFO_FINER, DTBook Fix] Running Flatten redundant nesting... > [INFO_FINER, DTBook Fix] Running Complete structure... > [INFO_FINER, DTBook Fix] Running List repair... > [INFO_FINER, DTBook Fix] Running IDREF repair... > [INFO_FINER, DTBook Fix] Running Remove invalid empty elements... > [INFO_FINER, DTBook Fix] Running Pagenum page attribute repair... > [INFO_FINER, DTBook Fix] Running Metadata repair... > [INFO_FINER, DTBook Fix] Running Empty MathML stripper... > [INFO_FINER, DTBook Fix] Running Invalid URI repair... > [INFO_FINER, DTBook Fix] Validating /tmp/temp1089616047982658169.tmp... > [INFO_FINER, DTBook Fix] Document was valid. > [INFO_FINER, DTBook Fix] Running Tidy Category... > [INFO_FINER, DTBook Fix] Running Remove empty elements... > [INFO_FINER, DTBook Fix] Running Move pagenum... > [INFO_FINER, DTBook Fix] Running Pagenum page attribute tidy... 
> [INFO_FINER, DTBook Fix] Running Change inline pagenum to block... > [INFO_FINER, DTBook Fix] Running Author and Title addition... > [INFO_FINER, DTBook Fix] Running Add xml:lang... > [INFO_FINER, DTBook Fix] Running Indenter... > [INFO_FINER, DTBook Fix] Validating /tmp/temp1089616047982658169.tmp... > [INFO_FINER, DTBook Fix] Document was valid. > [INFO_FINER, DTBook Fix] Running Narrator Category... > [INFO_FINER, DTBook Fix] Running Narrator required metadata... > [INFO, DTBook Fix] XSLT message: Removing dc:Subject lacking content > [INFO, DTBook Fix] XSLT message: Removing dc:Description lacking content > [INFO_FINER, DTBook Fix] Running Narrator required headings (rule 14)... > [INFO, DTBook Fix] XSLT message: Added a dummy h1 > [INFO_FINER, DTBook Fix] Running Narrator required headings (rule 100)... > [INFO_FINER, DTBook Fix] Running Narrator title... > [INFO_FINER, DTBook Fix] Running Narrator lists or dl in p... > [INFO_FINER, DTBook Fix] Validating /tmp/temp1089616047982658169.tmp... > [INFO_FINER, DTBook Fix] Document was valid. > [INFO_FINER, DTBook Fix] Running Indenter... > [INFO_FINER, DTBook Fix] Running Indenter... > [STATE] Transformer DTBook Fix just stopped > [STATE] Transformer Validator just started > [INFO_FINER, Validator] Validating a Dtbook document. > [INFO_FINER, Validator] Proceeding with (inline and/or external) schema validation with 1 schemas. > [INFO_FINER, Validator] Validating using the Schematron Schema /dtbook-2005-narrator.sch. > [INFO_FINER, Validator] Completed full validation of input fileset. > [INFO_FINER, Validator] Completed validation using inline and/or inparameter Schemas. > [INFO, Validator] No errors or warnings reported. Congratulations! 
> [STATE] Transformer Validator just stopped > [STATE] Transformer XML Detection just started > [INFO_FINER, XML Detection] Using input file test_datei_TextOnly_WordDetection/temp/dtbookFix.xml > [INFO_FINER, XML Detection] Using output file test_datei_TextOnly_WordDetection/temp/dtbookSentence.xml > [INFO_FINER, XML Detection] Starting word detection > [DEBUG, XML Detection] Temp word: /tmp/temp3488198217531512971.tmp > [INFO_FINER, XML Detection] Word detection finished > [INFO_FINER, XML Detection] Copying to final destination > [INFO_FINER, XML Detection] Copying referred files > [STATE] Transformer XML Detection just stopped > [STATE] Transformer Mixed Content Normalizer just started > [INFO_FINER, Mixed Content Normalizer] Input document has 389 elements > [INFO_FINER, Mixed Content Normalizer] 5 wrapper element inserts were done to input document during normalization. > [INFO_FINER, Mixed Content Normalizer] 227 synchronization points were located and marked in input document. > [STATE] Transformer Mixed Content Normalizer just stopped > [STATE] Transformer Z39.86-2005 text-only book creator (beta) just started > [INFO, Z39.86-2005 text-only book creator (beta)] Input dtbook file = test_datei_TextOnly_WordDetection/temp/tmp.xml > [INFO, Z39.86-2005 text-only book creator (beta)] Output path = test_datei_TextOnly_WordDetection > [INFO, Z39.86-2005 text-only book creator (beta)] Resource file = /home/eglic/tmp/pipeline-20120314/transformers/us_rfbd_textOnlyDtbCreator/toResource.res > [INFO, Z39.86-2005 text-only book creator (beta)] Configuration file = /home/eglic/tmp/pipeline-20120314/transformers/us_rfbd_textOnlyDtbCreator/toConfig.xml > [INFO, Z39.86-2005 text-only book creator (beta)] Reading config file > [INFO, Z39.86-2005 text-only book creator (beta)] Filtering input file to collect data, and add @id's and @smilRef's > [INFO, Z39.86-2005 text-only book creator (beta)] Handling SMIL links > [INFO, Z39.86-2005 text-only book creator (beta)] Generating 
SMIL files > [INFO, Z39.86-2005 text-only book creator (beta)] Generating NCX > [INFO, Z39.86-2005 text-only book creator (beta)] Copying files to output > [INFO, Z39.86-2005 text-only book creator (beta)] Generating OPF > [STATE] Transformer Z39.86-2005 text-only book creator (beta) just stopped > [STATE] Transformer Validator just started > [INFO_FINER, Validator] Validating a Z3986 DTB. > [INFO, Validator] Validating with ZedVal version 2.1 > [ERROR, Validator] An error occurred while validating the fileset: Cannot determine version of tmp.xml; missing doctype > . > [ERROR, Pipeline Core] Exceptions occurred during validation process. Transformer aborting. > org.daisy.pipeline.exception.JobFailedException: Exceptions occurred during validation process. Transformer aborting. > at org.daisy.pipeline.core.script.Runner.execute(Runner.java:116) > at org.daisy.pipeline.core.PipelineCore.execute(PipelineCore.java:280) > at org.daisy.pipeline.ui.CommandLineUI.main(CommandLineUI.java:213) > org.daisy.pipeline.exception.TransformerRunException: Exceptions occurred during validation process. Transformer aborting. 
> at int_daisy_validator.ValidatorDriver.execute(ValidatorDriver.java:318) > at org.daisy.pipeline.core.transformer.Transformer.executeWrapper(Transformer.java:174) > at org.daisy.pipeline.core.transformer.TransformerHandler.run(TransformerHandler.java:124) > at org.daisy.pipeline.core.script.Runner.execute(Runner.java:98) > at org.daisy.pipeline.core.PipelineCore.execute(PipelineCore.java:280) > at org.daisy.pipeline.ui.CommandLineUI.main(CommandLineUI.java:213) > > > Thanks > Christian > -- > Christian Egli > Swiss Library for the Blind, Visually Impaired and Print Disabled > Grubenstrasse 12, CH-8045 Zürich, Switzerland > ------------------------------------------------------------------------------ > Virtualization & Cloud Management Using Capacity Planning > Cloud computing makes use of virtualization - but cloud computing > also focuses on allowing computing to be delivered as a service. > http://www.accelacomm.com/jaw/sfnl/114/51521223/_______________________________________________ > Daisymfc-developer mailing list > Dai...@li... > https://lists.sourceforge.net/lists/listinfo/daisymfc-developer |
|
From: Christian E. <chr...@sb...> - 2012-03-14 10:29:02
|
Hi Alex

We're planning to produce text-only DTBs using the DAISY Pipeline. We're testing the two transformers, the one from you and the other one from RFBD. With respect to that I have two questions:

1) I remember that there was a discussion on the technical development list once about the differences between the two, however the archives[1] don't go back that far. From what I can see, the one from RFBD seems to support math and sentence detection, which yours seems to be lacking. From your perspective: what are the strengths of your transformer, and why are you using it over the other one?

2) Speaking of sentence detection, would it be possible to have sentence detection in your transformer?

Thanks
Christian

Footnotes:
[1] http://lyris.daisy.org/read/?forum=technical-developments

--
Christian Egli
Swiss Library for the Blind, Visually Impaired and Print Disabled
Grubenstrasse 12, CH-8045 Zürich, Switzerland |
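[Editor's note] Sentence detection in real transformers uses language-specific rules, but the basic idea asked about above can be illustrated with a naive splitter. This is only a sketch (Python, not Pipeline code; the function name is mine): it treats ".", "!" or "?" followed by whitespace and a capital letter as a boundary, which real implementations refine to handle abbreviations and ellipses.

```python
import re

# Naive boundary: a sentence-ending mark, then whitespace, then an
# uppercase letter. Real sentence detectors also handle abbreviations
# ("e.g."), ellipses and locale-specific punctuation.
_BOUNDARY = re.compile(r'(?<=[.!?])\s+(?=[A-Z])')

def detect_sentences(text):
    """Split plain text into sentences at naive boundaries."""
    return [s.strip() for s in _BOUNDARY.split(text) if s.strip()]
```

In a DTBook context, each returned sentence would then be wrapped in a sent element; that step is omitted here.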
|
From: Romain D. <rde...@gm...> - 2012-01-02 12:59:52
|
For the record, the question has been cross-posted and answered in the forums: http://www.daisy.org/forums/15709

Romain.

On 29 Dec 2011, at 07:04, Nirmal Verma wrote:
> hello list,
> greetings and happy new year!
>
> While converting the RTF to XML through pipeline, i am not getting page numbers in the output XML. what is the method to get it.
>
> thanks and regards
>
> Nirmal Verma |
|
From: Nirmal V. <nir...@gm...> - 2011-12-29 06:04:45
|
hello list,

greetings and happy new year!

While converting RTF to XML through the pipeline, I am not getting page numbers in the output XML. What is the method to get them?

thanks and regards

Nirmal Verma |
|
From: Romain D. <rde...@gm...> - 2011-12-16 00:56:53
|
Dear all,

A new maintenance release of the DAISY Pipeline (version 20111215) is now available for download at: http://www.daisy.org/pipeline/download

The release includes:

• alignment of the TTS Narrator with the version embedded in the just-released "Save as DAISY" add-in for Microsoft Word (2.5.5.1)
• a vastly improved DTBook to LaTeX script used for large print production, by Christian Egli (SBS)
• updated PEF-related scripts based on the Braille Utils and Dotify libraries, by Joel Håkansson (TPB)
• a new "DTBook Volume Splitter" script, which takes a DTBook XML document and inserts special div elements as split points, to be used by other scripts like "DTBook to LaTeX"; contributed by Christian Egli (SBS). This is a beta version and has only been tested on Linux.
• bug fixes to several Pipeline scripts, including the TTS Narrator, DTBook Fixer and DTBook to HTML

See the detailed release notes for more information: http://www.daisy.org/pipeline/release-notes

Please use the DAISY Pipeline forum to give us feedback or request support: http://www.daisy.org/forums/pipeline

Many thanks to all the contributors, testers and bug reporters!

Best Regards,
Romain.

--
Romain Deltour, Software Developer
The DAISY Consortium
http://www.daisy.org |
|
From: Romain D. <rde...@gm...> - 2011-11-15 13:02:40
|
Hi all,

I'm planning to publish a new maintenance release of the Pipeline 1 soon, including all the bug fixes and improvements since last March. The target release date would be either this Friday, Nov 18th, or next Friday, Nov 25th, depending on whether anyone here wants more time to push some commits. Please let me know!

BR,
Romain. |
|
From: Romain D. <rde...@gm...> - 2011-10-05 07:30:11
|
Dear all,

The 1.0 release of the DAISY Pipeline 2 project is now available at: http://daisy-pipeline.googlecode.com/files/pipeline2-1.0.zip

The full list of changes can be found at: http://code.google.com/p/daisy-pipeline/wiki/ReleaseNotes

This major milestone marks the end of the first development phase, but not the end of the project, far from it! The second development phase is currently being re-chartered and will officially begin as soon as it is approved by the DAISY Board. It will bring many new features and is intended to deliver steady improvements to the current version.

As usual, comments and feedback are welcome! Feel free to contact us:

* via the user forum: http://www.daisy.org/forums/pipeline2
* via the developers mailing list: http://groups.google.com/group/daisy-pipeline-dev?pli=1
* via a direct email to the project lead: rdeltour (at) gmail (dot) com

Best regards,
Romain.

--
Romain Deltour, Software Developer
DAISY Pipeline project lead
The DAISY Consortium
http://www.daisy.org |
|
From: Romain D. <rde...@gm...> - 2011-07-02 01:43:51
|
Dear all,

A first beta release of the DAISY Pipeline 2 project (for the 1.0 version due in September 2011) is now available and can be downloaded at: http://daisy-pipeline.googlecode.com/files/pipeline2-1.0-beta1.zip

The package includes:

- a modular runtime framework for the Pipeline 2 modules, executable as a command line tool or via a REST web API.
- a set of processing modules providing the following conversions:
  * dtbook-to-zedai - Convert DTBook XML to ZedAI XML
  * upgrade-dtbook - DTBook utility for upgrading to DTBook 2005-3.
  * merge-dtbook - DTBook utility for merging two or more files.
  * zedai-to-epub3 - ZedAI to EPUB 3
  * daisy202-to-epub3 - DAISY 2.02 to EPUB 3
- a set of sample documents to test the provided conversions

Please be aware that this is beta software with many rough edges. An informative list of the known limitations is detailed in the README file included in the released package.

From now on, the Pipeline 2 team will try and publish incremental releases every two weeks until the final release is made available in September 2011.

If you wish to join the effort and contribute to the Pipeline 2 project, feel free to contact the project lead via email at rdeltour(at)gmail(dot)com or simply join us on the developers discussion list hosted on Google Groups: <http://groups.google.com/group/daisy-pipeline-dev>

Best regards,
Romain (for the Pipeline 2 developers team)

--
Romain Deltour, Software Developer
The DAISY Consortium
http://www.daisy.org |
|
From: Christian E. <chr...@sb...> - 2011-05-10 08:25:55
|
joe...@us... writes:

> > I cannot build trunk of dmfc. I get compile errors in PEF2Text.java.
>
> added braille utils to build.xml jar list

Ah, OK, thanks Joel. Now it compiles and all is well.

Christian
--
Christian Egli
Swiss Library for the Blind, Visually Impaired and Print Disabled
Grubenstrasse 12, CH-8045 Zürich, Switzerland
-----
The SBS cordially invites you: open house on 25 June 2011, 9 a.m. to 4 p.m.
More information at http://www.sbs.ch/offenetuer |
|
From: Christian E. <chr...@sb...> - 2011-05-10 07:35:13
|
Hi all
I cannot build trunk of dmfc. I get compile errors in PEF2Text.java.
~/tmp/dmfc $ ant -f build-core.xml compile
Buildfile: /home/eglic/tmp/dmfc/build-core.xml
removeClasses:
[delete] Deleting directory /home/eglic/tmp/dmfc/bin/org
compile:
[javac] /home/eglic/tmp/dmfc/build-core.xml:202: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
[javac] Compiling 556 source files to /home/eglic/tmp/dmfc/bin
[javac] /home/eglic/tmp/dmfc/src/org/daisy/pipeline/util/DocIndexGenerator.java:58: warning: com.sun.org.apache.xml.internal.serialize.OutputFormat is internal proprietary API and may be removed in a future release
[javac] import com.sun.org.apache.xml.internal.serialize.OutputFormat;
[javac] ^
[javac] /home/eglic/tmp/dmfc/src/org/daisy/pipeline/util/DocIndexGenerator.java:59: warning: com.sun.org.apache.xml.internal.serialize.XMLSerializer is internal proprietary API and may be removed in a future release
[javac] import com.sun.org.apache.xml.internal.serialize.XMLSerializer;
[javac] ^
[javac] /home/eglic/tmp/dmfc/src/org/daisy/pipeline/util/DocIndexGenerator.java:135: warning: com.sun.org.apache.xml.internal.serialize.OutputFormat is internal proprietary API and may be removed in a future release
[javac] OutputFormat outputFormat = new OutputFormat(doc);
[javac] ^
[javac] /home/eglic/tmp/dmfc/src/org/daisy/pipeline/util/DocIndexGenerator.java:135: warning: com.sun.org.apache.xml.internal.serialize.OutputFormat is internal proprietary API and may be removed in a future release
[javac] OutputFormat outputFormat = new OutputFormat(doc);
[javac] ^
[javac] /home/eglic/tmp/dmfc/src/org/daisy/pipeline/util/DocIndexGenerator.java:141: warning: com.sun.org.apache.xml.internal.serialize.XMLSerializer is internal proprietary API and may be removed in a future release
[javac] XMLSerializer serializer = new XMLSerializer(fout, outputFormat);
[javac] ^
[javac] /home/eglic/tmp/dmfc/src/org/daisy/pipeline/util/DocIndexGenerator.java:141: warning: com.sun.org.apache.xml.internal.serialize.XMLSerializer is internal proprietary API and may be removed in a future release
[javac] XMLSerializer serializer = new XMLSerializer(fout, outputFormat);
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 warnings
[javac] /home/eglic/tmp/dmfc/build-core.xml:208: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
[javac] Compiling 589 source files to /home/eglic/tmp/dmfc/transformers
[javac] /home/eglic/tmp/dmfc/transformers/int_daisy_dtbMigrator/impl/d202_z2005/MigratorImpl.java:97: warning: com.sun.org.apache.xml.internal.serialize.OutputFormat is internal proprietary API and may be removed in a future release
[javac] import com.sun.org.apache.xml.internal.serialize.OutputFormat;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/int_daisy_dtbMigrator/impl/d202_z2005/MigratorImpl.java:98: warning: com.sun.org.apache.xml.internal.serialize.XMLSerializer is internal proprietary API and may be removed in a future release
[javac] import com.sun.org.apache.xml.internal.serialize.XMLSerializer;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:12: package org.daisy.braille.embosser does not exist
[javac] import org.daisy.braille.embosser.Embosser;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:13: package org.daisy.braille.embosser does not exist
[javac] import org.daisy.braille.embosser.EmbosserCatalog;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:14: package org.daisy.braille.embosser does not exist
[javac] import org.daisy.braille.embosser.EmbosserFeatures;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:15: package org.daisy.braille.embosser does not exist
[javac] import org.daisy.braille.embosser.EmbosserWriter;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:16: package org.daisy.braille.embosser does not exist
[javac] import org.daisy.braille.embosser.UnsupportedWidthException;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:17: package org.daisy.braille.facade does not exist
[javac] import org.daisy.braille.facade.PEFConverterFacade;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:18: package org.daisy.braille.pef does not exist
[javac] import org.daisy.braille.pef.PEFHandler;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:19: package org.daisy.braille.pef does not exist
[javac] import org.daisy.braille.pef.Range;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:20: package org.daisy.paper does not exist
[javac] import org.daisy.paper.PageFormat;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:21: package org.daisy.paper does not exist
[javac] import org.daisy.paper.PaperCatalog;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:25: package org.daisy.printing does not exist
[javac] import org.daisy.printing.PrinterDevice;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:180: cannot find symbol
[javac] symbol : class Embosser
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] private void convert(File input, File output, Embosser em, Range rangeObj, String paperWidthFallback, boolean align, int offset) throws TransformerRunException, UnsupportedWidthException {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:180: cannot find symbol
[javac] symbol : class Range
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] private void convert(File input, File output, Embosser em, Range rangeObj, String paperWidthFallback, boolean align, int offset) throws TransformerRunException, UnsupportedWidthException {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:180: cannot find symbol
[javac] symbol : class UnsupportedWidthException
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] private void convert(File input, File output, Embosser em, Range rangeObj, String paperWidthFallback, boolean align, int offset) throws TransformerRunException, UnsupportedWidthException {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:189: cannot find symbol
[javac] symbol : class EmbosserWriter
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] private void convert(File input, EmbosserWriter embosserObj, Range rangeObj, String paperWidthFallback, boolean align, int offset) throws TransformerRunException, UnsupportedWidthException {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:189: cannot find symbol
[javac] symbol : class Range
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] private void convert(File input, EmbosserWriter embosserObj, Range rangeObj, String paperWidthFallback, boolean align, int offset) throws TransformerRunException, UnsupportedWidthException {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:189: cannot find symbol
[javac] symbol : class UnsupportedWidthException
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] private void convert(File input, EmbosserWriter embosserObj, Range rangeObj, String paperWidthFallback, boolean align, int offset) throws TransformerRunException, UnsupportedWidthException {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/text2pef/Text2PEF.java:8: package org.daisy.braille.pef does not exist
[javac] import org.daisy.braille.pef.TextHandler;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/int_daisy_dtbMigrator/impl/d202_z2005/MigratorImpl.java:647: warning: com.sun.org.apache.xml.internal.serialize.OutputFormat is internal proprietary API and may be removed in a future release
[javac] OutputFormat outputFormat = new OutputFormat(opfDom);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/int_daisy_dtbMigrator/impl/d202_z2005/MigratorImpl.java:647: warning: com.sun.org.apache.xml.internal.serialize.OutputFormat is internal proprietary API and may be removed in a future release
[javac] OutputFormat outputFormat = new OutputFormat(opfDom);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/int_daisy_dtbMigrator/impl/d202_z2005/MigratorImpl.java:654: warning: com.sun.org.apache.xml.internal.serialize.XMLSerializer is internal proprietary API and may be removed in a future release
[javac] XMLSerializer serializer = new XMLSerializer(fout, outputFormat);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/int_daisy_dtbMigrator/impl/d202_z2005/MigratorImpl.java:654: warning: com.sun.org.apache.xml.internal.serialize.XMLSerializer is internal proprietary API and may be removed in a future release
[javac] XMLSerializer serializer = new XMLSerializer(fout, outputFormat);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:65: cannot find symbol
[javac] symbol : class Range
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] Range rangeObj = null;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:66: cannot find symbol
[javac] symbol : class EmbosserCatalog
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] EmbosserCatalog ef = EmbosserCatalog.newInstance();
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:66: cannot find symbol
[javac] symbol : variable EmbosserCatalog
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] EmbosserCatalog ef = EmbosserCatalog.newInstance();
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:67: cannot find symbol
[javac] symbol : class Embosser
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] Embosser em = null;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:74: cannot find symbol
[javac] symbol : class PaperCatalog
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] PaperCatalog pc = PaperCatalog.newInstance();
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:74: cannot find symbol
[javac] symbol : variable PaperCatalog
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] PaperCatalog pc = PaperCatalog.newInstance();
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:75: cannot find symbol
[javac] symbol : variable EmbosserFeatures
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] em.setFeature(EmbosserFeatures.PAGE_FORMAT, new PageFormat(pc.get(papersize)));
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:75: cannot find symbol
[javac] symbol : class PageFormat
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] em.setFeature(EmbosserFeatures.PAGE_FORMAT, new PageFormat(pc.get(papersize)));
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:79: cannot find symbol
[javac] symbol : variable Range
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] rangeObj = Range.parseRange(range);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:81: cannot find symbol
[javac] symbol : variable EmbosserFeatures
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] em.setFeature(EmbosserFeatures.TABLE, table);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:98: cannot find symbol
[javac] symbol : class PrinterDevice
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] PrinterDevice bd = null;
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:100: cannot find symbol
[javac] symbol : class PrinterDevice
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] bd = new PrinterDevice(deviceName, true);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:108: cannot find symbol
[javac] symbol : class EmbosserWriter
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] EmbosserWriter embosserObj = em.newEmbosserWriter(bd);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:116: cannot find symbol
[javac] symbol : class EmbosserWriter
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] EmbosserWriter embosserObj = em.newEmbosserWriter(os);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:126: cannot find symbol
[javac] symbol : class UnsupportedWidthException
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] } catch (UnsupportedWidthException e2) {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:131: cannot find symbol
[javac] symbol : class UnsupportedWidthException
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] } catch (UnsupportedWidthException e) {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:159: cannot find symbol
[javac] symbol : class UnsupportedWidthException
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] } catch (UnsupportedWidthException e) {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:182: cannot find symbol
[javac] symbol : class EmbosserWriter
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] EmbosserWriter embosserObj = em.newEmbosserWriter(new FileOutputStream(output));
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:191: cannot find symbol
[javac] symbol : class PEFHandler
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] PEFHandler ph = new PEFHandler.Builder(embosserObj)
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:191: package PEFHandler does not exist
[javac] PEFHandler ph = new PEFHandler.Builder(embosserObj)
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:197: cannot find symbol
[javac] symbol : variable PEFConverterFacade
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] PEFConverterFacade.parsePefFile(input, ph);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/pef2text/PEF2Text.java:202: cannot find symbol
[javac] symbol : class UnsupportedWidthException
[javac] location: class org_pef_text.pef2text.PEF2Text
[javac] } catch (UnsupportedWidthException e) {
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/text2pef/Text2PEF.java:48: package TextHandler does not exist
[javac] TextHandler.Builder builder = new TextHandler.Builder(input, output);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/text2pef/Text2PEF.java:48: package TextHandler does not exist
[javac] TextHandler.Builder builder = new TextHandler.Builder(input, output);
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/org_pef_text/text2pef/Text2PEF.java:64: cannot find symbol
[javac] symbol : class TextHandler
[javac] location: class org_pef_text.text2pef.Text2PEF
[javac] TextHandler tp = builder.build();
[javac] ^
[javac] /home/eglic/tmp/dmfc/transformers/se_tpb_speechgenerator/TTSBuilder.java:245: warning: non-varargs call of varargs method with inexact argument type for last parameter;
[javac] cast to java.lang.Object for a varargs call
[javac] cast to java.lang.Object[] for a non-varargs call and to suppress this warning
[javac] tts = (TTS) constructor.newInstance(constrParam);
[javac] ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 43 errors
[javac] 7 warnings
BUILD FAILED
/home/eglic/tmp/dmfc/build-core.xml:208: Compile failed; see the compiler error output for details.
Total time: 5 seconds
~/tmp/dmfc $
--
Christian Egli
Swiss Library for the Blind, Visually Impaired and Print Disabled
Grubenstrasse 12, CH-8045 Zürich, Switzerland
|
|
From: Christian E. <chr...@sb...> - 2011-05-09 10:25:23
|
Hi Dave
Dave Pawson <dav...@gm...> writes:
> I'd be cautious about counting paragraphs.
I know, it's a coarse approximation which also satisfies my requirement
that I have to partition at paragraph boundaries.
> then recurse through the book looking for count / partitions?
The current code is very simple and elegant (see below). I'll have to
think how I could implement your suggestion.
The split code is essentially just as follows:
<xsl:variable name="all_p" select="//dtb:p"/>
<xsl:variable name="p_per_volume" select="ceiling(count($all_p) div $number_of_volumes)"/>
<xsl:variable name="split_nodes" select="$all_p[position() mod $p_per_volume = 0]"/>

<xsl:template match="dtb:p">
  <xsl:if test="some $node in $split_nodes satisfies current() is $node">
    <xsl:element name="div" namespace="http://www.daisy.org/z3986/2005/dtbook/">
      <xsl:attribute name="class">large-print-volume-split</xsl:attribute>
      <xsl:element name="p" namespace="http://www.daisy.org/z3986/2005/dtbook/"/>
    </xsl:element>
  </xsl:if>
  <xsl:copy>
    <xsl:apply-templates select="@*|node()"/>
  </xsl:copy>
</xsl:template>
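[Editor's note] The selection logic above inserts a split before every paragraph whose position is a multiple of ceiling(count($all_p) div $number_of_volumes). For readers following along, the same arithmetic can be mimicked with a small sketch (Python purely for illustration; the actual transformer is the XSLT shown, and the function name is mine):

```python
import math

def split_positions(paragraph_count, volumes):
    """1-based positions (matching XPath position()) of the paragraphs
    at which a volume-split div is inserted: every paragraph whose
    position is a multiple of ceil(count / volumes). Note that, like
    the XSLT, this also flags the final paragraph when the count
    divides evenly."""
    per_volume = math.ceil(paragraph_count / volumes)
    return [p for p in range(1, paragraph_count + 1) if p % per_volume == 0]
```

For example, 10 paragraphs split into 3 volumes yields splits at positions 4 and 8.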
Thanks
Christian
--
Christian Egli
Swiss Library for the Blind, Visually Impaired and Print Disabled
Grubenstrasse 12, CH-8045 Zürich, Switzerland
|
|
From: Dave P. <dav...@gm...> - 2011-05-09 10:06:27
|
On 9 May 2011 10:52, Christian Egli <chr...@sb...> wrote:
> Hi all
>
> I've implemented a volume splitter which can be used for large print
> volume splitting. It is a simple xslt based transformer which given a
> number of volumes inserts split points in a DTBook XML. It calculates
> the number of p's in a DTBook and inserts <div
> class="large-print-volume-split"/> in the DTBook[1]. Any transformer
> could then take specific actions based on this div element. The large
> print transformer for example inserts a volume cover page.

I'd be cautious about counting paragraphs. Try this for a wc:

<xsl:template match="/" name='main'>
  <xsl:value-of select="count(tokenize(normalize-space(.), '[ &#xD;&#xA;&#x9;]'))"/>
</xsl:template>

then recurse through the book looking for count / partitions?

HTH
--
Dave Pawson
XSLT XSL-FO FAQ. Docbook FAQ.
http://www.dpawson.co.uk |
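[Editor's note] Dave's idea, weighting split points by word count instead of paragraph count, can be sketched outside XSLT as well. This is an illustrative Python sketch (not Pipeline code; the function name is mine): given the paragraph texts, it picks the indices after which a new volume should start so that each volume carries roughly the same number of words, while still splitting only at paragraph boundaries.

```python
import math
import re

def word_balanced_split_indices(paragraphs, volumes):
    """Return 0-based indices of the paragraphs after which a new
    volume starts, chosen so each volume holds roughly equal words.
    Splits only at paragraph boundaries, as the XSLT splitter does."""
    counts = [len(re.findall(r"\S+", p)) for p in paragraphs]
    per_volume = math.ceil(sum(counts) / volumes)
    indices, running = [], 0
    for i, c in enumerate(counts):
        running += c
        # Close the current volume once it reaches its word budget,
        # but never create more than the requested number of volumes.
        if running >= per_volume and len(indices) < volumes - 1:
            indices.append(i)
            running = 0
    return indices
```

The trade-off versus the position-based XSLT is that long and short paragraphs no longer skew volume sizes, at the cost of a second pass to count words.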
|
From: Christian E. <chr...@sb...> - 2011-05-09 09:52:34
|
Hi all
I've implemented a volume splitter which can be used for large print
volume splitting. It is a simple XSLT-based transformer which, given a
number of volumes, inserts split points in a DTBook XML document. It
counts the p elements in the DTBook and inserts <div
class="large-print-volume-split"/> elements[1]. Any transformer
could then take specific actions based on this div element; the large
print transformer, for example, inserts a volume cover page.
Previously I had reservations about checking this transformer in because
I thought that I needed to invoke LaTeX, which was a no-go at the time
due to the non-accessible PDF produced by LaTeX. However this
transformer is very generic and doesn't even know there is such a thing
as LaTeX. So from that POV it should be no problem to add this
transformer to the repository.
In order to use the transformer you'll need to figure out how many
volumes you want to split your book into. You could do this manually by
invoking LaTeX beforehand and checking how many pages the book will
have. This however is not implemented in the transformer and an
organization will have to implement this separately (or ask for a Python
based implementation).
So, in essence I'm asking if anyone is against me adding my volume
splitter to the svn repo.
Thanks
Christian
Footnotes:
[1] Now that I think about it, this class should probably be simply
called "volume-split", as this transformer could also be used for
braille.
--
Christian Egli
Swiss Library for the Blind, Visually Impaired and Print Disabled
Grubenstrasse 12, CH-8045 Zürich, Switzerland
|
|
From: Christian E. <chr...@sb...> - 2011-05-06 07:23:11
|
Hi Romain

Romain Deltour <rde...@gm...> writes:

> It sounds like the proposed changes are still compatible with all
> major platforms, so it's fine by me.

Yes, they should be. The memoir class comes packaged with TeX Live and MiKTeX[1]. MiKTeX is available for Windows, and TeX Live is available for all Unixen, Mac OS X and also Windows[2]. The changes make many things easier and more robust. Expect a commit soon.

Thanks

Footnotes:
[1] http://www.ctan.org/pkg/memoir
[2] http://www.tug.org/texlive/

--
Christian Egli
Swiss Library for the Blind, Visually Impaired and Print Disabled
Grubenstrasse 12, CH-8045 Zürich, Switzerland |
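[Editor's note] For readers unfamiliar with memoir: a minimal large-print preamble might look as follows. This is a sketch based on the memoir class documentation, not actual dtbook2latex output; per that documentation, the extrafontsizes option switches to a scalable font so the larger point-size class options work.

```latex
% Sketch only, not dtbook2latex output. Option names per the memoir
% class documentation; extrafontsizes enables the large type sizes.
\documentclass[25pt,extrafontsizes,a4paper]{memoir}
\begin{document}
\chapter{Sample chapter}
Large print body text at 25pt.
\end{document}
```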
|
From: Romain D. <rde...@gm...> - 2011-05-05 14:26:46
|
Hi Christian,

It sounds like the proposed changes are still compatible with all major platforms, so it's fine by me. Note that I'm not very familiar with LaTeX though.

Cheers,
Romain.

On 2 May 11, at 12:29, Christian Egli wrote:
> Hi all
>
> We want to increase our large print offering considerably here at SBS.
> In the course of doing this I want to give the dtbook2latex an overhaul.
> In the past I refrained from requiring "exotic" latex packages. I just
> required the absolute minimum. However in the long run this makes things
> more complicated than they ought to be. I would like to use the memoir
> package which contains a number of other stuff which I was using
> previously. Also the memoir class allows me to offer bigger font sizes
> than 20pt. The memoir class is contained in texlive which is available
> for all platforms, so this should not really be a problem for any users.
>
> So before I go ahead I just wanted to make sure there was no fundamental
> opposition to migrate the dtbook2latex transformer to using the memoir
> class (as opposed to extbook, geometry, fancyhdr, titlesec, titletoc and
> setspace).
>
> Thanks
> Christian
> --
> Christian Egli
> Swiss Library for the Blind, Visually Impaired and Print Disabled
> Grubenstrasse 12, CH-8045 Zürich, Switzerland |